Dec  6 04:00:30 np0005548915 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  6 04:00:30 np0005548915 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  6 04:00:30 np0005548915 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  6 04:00:30 np0005548915 kernel: BIOS-provided physical RAM map:
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  6 04:00:30 np0005548915 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  6 04:00:30 np0005548915 kernel: NX (Execute Disable) protection: active
Dec  6 04:00:30 np0005548915 kernel: APIC: Static calls initialized
Dec  6 04:00:30 np0005548915 kernel: SMBIOS 2.8 present.
Dec  6 04:00:30 np0005548915 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  6 04:00:30 np0005548915 kernel: Hypervisor detected: KVM
Dec  6 04:00:30 np0005548915 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  6 04:00:30 np0005548915 kernel: kvm-clock: using sched offset of 3180231435 cycles
Dec  6 04:00:30 np0005548915 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  6 04:00:30 np0005548915 kernel: tsc: Detected 2799.998 MHz processor
Dec  6 04:00:30 np0005548915 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  6 04:00:30 np0005548915 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  6 04:00:30 np0005548915 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  6 04:00:30 np0005548915 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  6 04:00:30 np0005548915 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  6 04:00:30 np0005548915 kernel: Using GB pages for direct mapping
Dec  6 04:00:30 np0005548915 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  6 04:00:30 np0005548915 kernel: ACPI: Early table checksum verification disabled
Dec  6 04:00:30 np0005548915 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  6 04:00:30 np0005548915 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 04:00:30 np0005548915 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 04:00:30 np0005548915 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 04:00:30 np0005548915 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  6 04:00:30 np0005548915 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 04:00:30 np0005548915 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 04:00:30 np0005548915 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  6 04:00:30 np0005548915 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  6 04:00:30 np0005548915 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  6 04:00:30 np0005548915 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  6 04:00:30 np0005548915 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  6 04:00:30 np0005548915 kernel: No NUMA configuration found
Dec  6 04:00:30 np0005548915 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  6 04:00:30 np0005548915 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  6 04:00:30 np0005548915 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  6 04:00:30 np0005548915 kernel: Zone ranges:
Dec  6 04:00:30 np0005548915 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  6 04:00:30 np0005548915 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  6 04:00:30 np0005548915 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  6 04:00:30 np0005548915 kernel:  Device   empty
Dec  6 04:00:30 np0005548915 kernel: Movable zone start for each node
Dec  6 04:00:30 np0005548915 kernel: Early memory node ranges
Dec  6 04:00:30 np0005548915 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  6 04:00:30 np0005548915 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  6 04:00:30 np0005548915 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  6 04:00:30 np0005548915 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  6 04:00:30 np0005548915 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  6 04:00:30 np0005548915 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  6 04:00:30 np0005548915 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  6 04:00:30 np0005548915 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  6 04:00:30 np0005548915 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  6 04:00:30 np0005548915 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  6 04:00:30 np0005548915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  6 04:00:30 np0005548915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  6 04:00:30 np0005548915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  6 04:00:30 np0005548915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  6 04:00:30 np0005548915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  6 04:00:30 np0005548915 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  6 04:00:30 np0005548915 kernel: TSC deadline timer available
Dec  6 04:00:30 np0005548915 kernel: CPU topo: Max. logical packages:   8
Dec  6 04:00:30 np0005548915 kernel: CPU topo: Max. logical dies:       8
Dec  6 04:00:30 np0005548915 kernel: CPU topo: Max. dies per package:   1
Dec  6 04:00:30 np0005548915 kernel: CPU topo: Max. threads per core:   1
Dec  6 04:00:30 np0005548915 kernel: CPU topo: Num. cores per package:     1
Dec  6 04:00:30 np0005548915 kernel: CPU topo: Num. threads per package:   1
Dec  6 04:00:30 np0005548915 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  6 04:00:30 np0005548915 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  6 04:00:30 np0005548915 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  6 04:00:30 np0005548915 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  6 04:00:30 np0005548915 kernel: Booting paravirtualized kernel on KVM
Dec  6 04:00:30 np0005548915 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  6 04:00:30 np0005548915 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  6 04:00:30 np0005548915 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  6 04:00:30 np0005548915 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  6 04:00:30 np0005548915 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  6 04:00:30 np0005548915 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  6 04:00:30 np0005548915 kernel: random: crng init done
Dec  6 04:00:30 np0005548915 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: Fallback order for Node 0: 0 
Dec  6 04:00:30 np0005548915 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  6 04:00:30 np0005548915 kernel: Policy zone: Normal
Dec  6 04:00:30 np0005548915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  6 04:00:30 np0005548915 kernel: software IO TLB: area num 8.
Dec  6 04:00:30 np0005548915 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  6 04:00:30 np0005548915 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  6 04:00:30 np0005548915 kernel: ftrace: allocated 193 pages with 3 groups
Dec  6 04:00:30 np0005548915 kernel: Dynamic Preempt: voluntary
Dec  6 04:00:30 np0005548915 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  6 04:00:30 np0005548915 kernel: rcu: 	RCU event tracing is enabled.
Dec  6 04:00:30 np0005548915 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  6 04:00:30 np0005548915 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  6 04:00:30 np0005548915 kernel: 	Rude variant of Tasks RCU enabled.
Dec  6 04:00:30 np0005548915 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  6 04:00:30 np0005548915 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  6 04:00:30 np0005548915 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  6 04:00:30 np0005548915 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  6 04:00:30 np0005548915 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  6 04:00:30 np0005548915 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  6 04:00:30 np0005548915 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  6 04:00:30 np0005548915 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  6 04:00:30 np0005548915 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  6 04:00:30 np0005548915 kernel: Console: colour VGA+ 80x25
Dec  6 04:00:30 np0005548915 kernel: printk: console [ttyS0] enabled
Dec  6 04:00:30 np0005548915 kernel: ACPI: Core revision 20230331
Dec  6 04:00:30 np0005548915 kernel: APIC: Switch to symmetric I/O mode setup
Dec  6 04:00:30 np0005548915 kernel: x2apic enabled
Dec  6 04:00:30 np0005548915 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  6 04:00:30 np0005548915 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  6 04:00:30 np0005548915 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  6 04:00:30 np0005548915 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  6 04:00:30 np0005548915 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  6 04:00:30 np0005548915 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  6 04:00:30 np0005548915 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  6 04:00:30 np0005548915 kernel: Spectre V2 : Mitigation: Retpolines
Dec  6 04:00:30 np0005548915 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  6 04:00:30 np0005548915 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  6 04:00:30 np0005548915 kernel: RETBleed: Mitigation: untrained return thunk
Dec  6 04:00:30 np0005548915 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  6 04:00:30 np0005548915 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  6 04:00:30 np0005548915 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  6 04:00:30 np0005548915 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  6 04:00:30 np0005548915 kernel: x86/bugs: return thunk changed
Dec  6 04:00:30 np0005548915 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  6 04:00:30 np0005548915 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  6 04:00:30 np0005548915 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  6 04:00:30 np0005548915 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  6 04:00:30 np0005548915 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  6 04:00:30 np0005548915 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  6 04:00:30 np0005548915 kernel: Freeing SMP alternatives memory: 40K
Dec  6 04:00:30 np0005548915 kernel: pid_max: default: 32768 minimum: 301
Dec  6 04:00:30 np0005548915 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  6 04:00:30 np0005548915 kernel: landlock: Up and running.
Dec  6 04:00:30 np0005548915 kernel: Yama: becoming mindful.
Dec  6 04:00:30 np0005548915 kernel: SELinux:  Initializing.
Dec  6 04:00:30 np0005548915 kernel: LSM support for eBPF active
Dec  6 04:00:30 np0005548915 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  6 04:00:30 np0005548915 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  6 04:00:30 np0005548915 kernel: ... version:                0
Dec  6 04:00:30 np0005548915 kernel: ... bit width:              48
Dec  6 04:00:30 np0005548915 kernel: ... generic registers:      6
Dec  6 04:00:30 np0005548915 kernel: ... value mask:             0000ffffffffffff
Dec  6 04:00:30 np0005548915 kernel: ... max period:             00007fffffffffff
Dec  6 04:00:30 np0005548915 kernel: ... fixed-purpose events:   0
Dec  6 04:00:30 np0005548915 kernel: ... event mask:             000000000000003f
Dec  6 04:00:30 np0005548915 kernel: signal: max sigframe size: 1776
Dec  6 04:00:30 np0005548915 kernel: rcu: Hierarchical SRCU implementation.
Dec  6 04:00:30 np0005548915 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  6 04:00:30 np0005548915 kernel: smp: Bringing up secondary CPUs ...
Dec  6 04:00:30 np0005548915 kernel: smpboot: x86: Booting SMP configuration:
Dec  6 04:00:30 np0005548915 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  6 04:00:30 np0005548915 kernel: smp: Brought up 1 node, 8 CPUs
Dec  6 04:00:30 np0005548915 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  6 04:00:30 np0005548915 kernel: node 0 deferred pages initialised in 11ms
Dec  6 04:00:30 np0005548915 kernel: Memory: 7764172K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec  6 04:00:30 np0005548915 kernel: devtmpfs: initialized
Dec  6 04:00:30 np0005548915 kernel: x86/mm: Memory block size: 128MB
Dec  6 04:00:30 np0005548915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  6 04:00:30 np0005548915 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  6 04:00:30 np0005548915 kernel: pinctrl core: initialized pinctrl subsystem
Dec  6 04:00:30 np0005548915 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  6 04:00:30 np0005548915 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  6 04:00:30 np0005548915 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  6 04:00:30 np0005548915 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  6 04:00:30 np0005548915 kernel: audit: initializing netlink subsys (disabled)
Dec  6 04:00:30 np0005548915 kernel: audit: type=2000 audit(1765011629.353:1): state=initialized audit_enabled=0 res=1
Dec  6 04:00:30 np0005548915 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  6 04:00:30 np0005548915 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  6 04:00:30 np0005548915 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  6 04:00:30 np0005548915 kernel: cpuidle: using governor menu
Dec  6 04:00:30 np0005548915 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  6 04:00:30 np0005548915 kernel: PCI: Using configuration type 1 for base access
Dec  6 04:00:30 np0005548915 kernel: PCI: Using configuration type 1 for extended access
Dec  6 04:00:30 np0005548915 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  6 04:00:30 np0005548915 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  6 04:00:30 np0005548915 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  6 04:00:30 np0005548915 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  6 04:00:30 np0005548915 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  6 04:00:30 np0005548915 kernel: Demotion targets for Node 0: null
Dec  6 04:00:30 np0005548915 kernel: cryptd: max_cpu_qlen set to 1000
Dec  6 04:00:30 np0005548915 kernel: ACPI: Added _OSI(Module Device)
Dec  6 04:00:30 np0005548915 kernel: ACPI: Added _OSI(Processor Device)
Dec  6 04:00:30 np0005548915 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  6 04:00:30 np0005548915 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  6 04:00:30 np0005548915 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  6 04:00:30 np0005548915 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  6 04:00:30 np0005548915 kernel: ACPI: Interpreter enabled
Dec  6 04:00:30 np0005548915 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  6 04:00:30 np0005548915 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  6 04:00:30 np0005548915 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  6 04:00:30 np0005548915 kernel: PCI: Using E820 reservations for host bridge windows
Dec  6 04:00:30 np0005548915 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  6 04:00:30 np0005548915 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  6 04:00:30 np0005548915 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [3] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [4] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [5] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [6] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [7] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [8] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [9] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [10] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [11] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [12] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [13] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [14] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [15] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [16] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [17] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [18] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [19] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [20] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [21] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [22] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [23] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [24] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [25] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [26] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [27] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [28] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [29] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [30] registered
Dec  6 04:00:30 np0005548915 kernel: acpiphp: Slot [31] registered
Dec  6 04:00:30 np0005548915 kernel: PCI host bridge to bus 0000:00
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  6 04:00:30 np0005548915 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  6 04:00:30 np0005548915 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  6 04:00:30 np0005548915 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  6 04:00:30 np0005548915 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  6 04:00:30 np0005548915 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  6 04:00:30 np0005548915 kernel: iommu: Default domain type: Translated
Dec  6 04:00:30 np0005548915 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  6 04:00:30 np0005548915 kernel: SCSI subsystem initialized
Dec  6 04:00:30 np0005548915 kernel: ACPI: bus type USB registered
Dec  6 04:00:30 np0005548915 kernel: usbcore: registered new interface driver usbfs
Dec  6 04:00:30 np0005548915 kernel: usbcore: registered new interface driver hub
Dec  6 04:00:30 np0005548915 kernel: usbcore: registered new device driver usb
Dec  6 04:00:30 np0005548915 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  6 04:00:30 np0005548915 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  6 04:00:30 np0005548915 kernel: PTP clock support registered
Dec  6 04:00:30 np0005548915 kernel: EDAC MC: Ver: 3.0.0
Dec  6 04:00:30 np0005548915 kernel: NetLabel: Initializing
Dec  6 04:00:30 np0005548915 kernel: NetLabel:  domain hash size = 128
Dec  6 04:00:30 np0005548915 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  6 04:00:30 np0005548915 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  6 04:00:30 np0005548915 kernel: PCI: Using ACPI for IRQ routing
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  6 04:00:30 np0005548915 kernel: vgaarb: loaded
Dec  6 04:00:30 np0005548915 kernel: clocksource: Switched to clocksource kvm-clock
Dec  6 04:00:30 np0005548915 kernel: VFS: Disk quotas dquot_6.6.0
Dec  6 04:00:30 np0005548915 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  6 04:00:30 np0005548915 kernel: pnp: PnP ACPI init
Dec  6 04:00:30 np0005548915 kernel: pnp: PnP ACPI: found 5 devices
Dec  6 04:00:30 np0005548915 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  6 04:00:30 np0005548915 kernel: NET: Registered PF_INET protocol family
Dec  6 04:00:30 np0005548915 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  6 04:00:30 np0005548915 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  6 04:00:30 np0005548915 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  6 04:00:30 np0005548915 kernel: NET: Registered PF_XDP protocol family
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  6 04:00:30 np0005548915 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  6 04:00:30 np0005548915 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  6 04:00:30 np0005548915 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80632 usecs
Dec  6 04:00:30 np0005548915 kernel: PCI: CLS 0 bytes, default 64
Dec  6 04:00:30 np0005548915 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  6 04:00:30 np0005548915 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  6 04:00:30 np0005548915 kernel: ACPI: bus type thunderbolt registered
Dec  6 04:00:30 np0005548915 kernel: Trying to unpack rootfs image as initramfs...
Dec  6 04:00:30 np0005548915 kernel: Initialise system trusted keyrings
Dec  6 04:00:30 np0005548915 kernel: Key type blacklist registered
Dec  6 04:00:30 np0005548915 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  6 04:00:30 np0005548915 kernel: zbud: loaded
Dec  6 04:00:30 np0005548915 kernel: integrity: Platform Keyring initialized
Dec  6 04:00:30 np0005548915 kernel: integrity: Machine keyring initialized
Dec  6 04:00:30 np0005548915 kernel: Freeing initrd memory: 87804K
Dec  6 04:00:30 np0005548915 kernel: NET: Registered PF_ALG protocol family
Dec  6 04:00:30 np0005548915 kernel: xor: automatically using best checksumming function   avx       
Dec  6 04:00:30 np0005548915 kernel: Key type asymmetric registered
Dec  6 04:00:30 np0005548915 kernel: Asymmetric key parser 'x509' registered
Dec  6 04:00:30 np0005548915 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  6 04:00:30 np0005548915 kernel: io scheduler mq-deadline registered
Dec  6 04:00:30 np0005548915 kernel: io scheduler kyber registered
Dec  6 04:00:30 np0005548915 kernel: io scheduler bfq registered
Dec  6 04:00:30 np0005548915 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  6 04:00:30 np0005548915 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  6 04:00:30 np0005548915 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  6 04:00:30 np0005548915 kernel: ACPI: button: Power Button [PWRF]
Dec  6 04:00:30 np0005548915 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  6 04:00:30 np0005548915 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  6 04:00:30 np0005548915 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  6 04:00:30 np0005548915 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  6 04:00:30 np0005548915 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  6 04:00:30 np0005548915 kernel: Non-volatile memory driver v1.3
Dec  6 04:00:30 np0005548915 kernel: rdac: device handler registered
Dec  6 04:00:30 np0005548915 kernel: hp_sw: device handler registered
Dec  6 04:00:30 np0005548915 kernel: emc: device handler registered
Dec  6 04:00:30 np0005548915 kernel: alua: device handler registered
Dec  6 04:00:30 np0005548915 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  6 04:00:30 np0005548915 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  6 04:00:30 np0005548915 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  6 04:00:30 np0005548915 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  6 04:00:30 np0005548915 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  6 04:00:30 np0005548915 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  6 04:00:30 np0005548915 kernel: usb usb1: Product: UHCI Host Controller
Dec  6 04:00:30 np0005548915 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  6 04:00:30 np0005548915 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  6 04:00:30 np0005548915 kernel: hub 1-0:1.0: USB hub found
Dec  6 04:00:30 np0005548915 kernel: hub 1-0:1.0: 2 ports detected
Dec  6 04:00:30 np0005548915 kernel: usbcore: registered new interface driver usbserial_generic
Dec  6 04:00:30 np0005548915 kernel: usbserial: USB Serial support registered for generic
Dec  6 04:00:30 np0005548915 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  6 04:00:30 np0005548915 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  6 04:00:30 np0005548915 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  6 04:00:30 np0005548915 kernel: mousedev: PS/2 mouse device common for all mice
Dec  6 04:00:30 np0005548915 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  6 04:00:30 np0005548915 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  6 04:00:30 np0005548915 kernel: rtc_cmos 00:04: registered as rtc0
Dec  6 04:00:30 np0005548915 kernel: rtc_cmos 00:04: setting system clock to 2025-12-06T09:00:29 UTC (1765011629)
Dec  6 04:00:30 np0005548915 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  6 04:00:30 np0005548915 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  6 04:00:30 np0005548915 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  6 04:00:30 np0005548915 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  6 04:00:30 np0005548915 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  6 04:00:30 np0005548915 kernel: usbcore: registered new interface driver usbhid
Dec  6 04:00:30 np0005548915 kernel: usbhid: USB HID core driver
Dec  6 04:00:30 np0005548915 kernel: drop_monitor: Initializing network drop monitor service
Dec  6 04:00:30 np0005548915 kernel: Initializing XFRM netlink socket
Dec  6 04:00:30 np0005548915 kernel: NET: Registered PF_INET6 protocol family
Dec  6 04:00:30 np0005548915 kernel: Segment Routing with IPv6
Dec  6 04:00:30 np0005548915 kernel: NET: Registered PF_PACKET protocol family
Dec  6 04:00:30 np0005548915 kernel: mpls_gso: MPLS GSO support
Dec  6 04:00:30 np0005548915 kernel: IPI shorthand broadcast: enabled
Dec  6 04:00:30 np0005548915 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  6 04:00:30 np0005548915 kernel: AES CTR mode by8 optimization enabled
Dec  6 04:00:30 np0005548915 kernel: sched_clock: Marking stable (1238005940, 153442775)->(1509443639, -117994924)
Dec  6 04:00:30 np0005548915 kernel: registered taskstats version 1
Dec  6 04:00:30 np0005548915 kernel: Loading compiled-in X.509 certificates
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  6 04:00:30 np0005548915 kernel: Demotion targets for Node 0: null
Dec  6 04:00:30 np0005548915 kernel: page_owner is disabled
Dec  6 04:00:30 np0005548915 kernel: Key type .fscrypt registered
Dec  6 04:00:30 np0005548915 kernel: Key type fscrypt-provisioning registered
Dec  6 04:00:30 np0005548915 kernel: Key type big_key registered
Dec  6 04:00:30 np0005548915 kernel: Key type encrypted registered
Dec  6 04:00:30 np0005548915 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  6 04:00:30 np0005548915 kernel: Loading compiled-in module X.509 certificates
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  6 04:00:30 np0005548915 kernel: ima: Allocated hash algorithm: sha256
Dec  6 04:00:30 np0005548915 kernel: ima: No architecture policies found
Dec  6 04:00:30 np0005548915 kernel: evm: Initialising EVM extended attributes:
Dec  6 04:00:30 np0005548915 kernel: evm: security.selinux
Dec  6 04:00:30 np0005548915 kernel: evm: security.SMACK64 (disabled)
Dec  6 04:00:30 np0005548915 kernel: evm: security.SMACK64EXEC (disabled)
Dec  6 04:00:30 np0005548915 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  6 04:00:30 np0005548915 kernel: evm: security.SMACK64MMAP (disabled)
Dec  6 04:00:30 np0005548915 kernel: evm: security.apparmor (disabled)
Dec  6 04:00:30 np0005548915 kernel: evm: security.ima
Dec  6 04:00:30 np0005548915 kernel: evm: security.capability
Dec  6 04:00:30 np0005548915 kernel: evm: HMAC attrs: 0x1
Dec  6 04:00:30 np0005548915 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  6 04:00:30 np0005548915 kernel: Running certificate verification RSA selftest
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  6 04:00:30 np0005548915 kernel: Running certificate verification ECDSA selftest
Dec  6 04:00:30 np0005548915 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  6 04:00:30 np0005548915 kernel: clk: Disabling unused clocks
Dec  6 04:00:30 np0005548915 kernel: Freeing unused decrypted memory: 2028K
Dec  6 04:00:30 np0005548915 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  6 04:00:30 np0005548915 kernel: Write protecting the kernel read-only data: 30720k
Dec  6 04:00:30 np0005548915 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  6 04:00:30 np0005548915 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  6 04:00:30 np0005548915 kernel: Run /init as init process
Dec  6 04:00:30 np0005548915 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  6 04:00:30 np0005548915 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  6 04:00:30 np0005548915 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  6 04:00:30 np0005548915 kernel: usb 1-1: Manufacturer: QEMU
Dec  6 04:00:30 np0005548915 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  6 04:00:30 np0005548915 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  6 04:00:30 np0005548915 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  6 04:00:30 np0005548915 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  6 04:00:30 np0005548915 systemd: Detected virtualization kvm.
Dec  6 04:00:30 np0005548915 systemd: Detected architecture x86-64.
Dec  6 04:00:30 np0005548915 systemd: Running in initrd.
Dec  6 04:00:30 np0005548915 systemd: No hostname configured, using default hostname.
Dec  6 04:00:30 np0005548915 systemd: Hostname set to <localhost>.
Dec  6 04:00:30 np0005548915 systemd: Initializing machine ID from VM UUID.
Dec  6 04:00:30 np0005548915 systemd: Queued start job for default target Initrd Default Target.
Dec  6 04:00:30 np0005548915 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  6 04:00:30 np0005548915 systemd: Reached target Local Encrypted Volumes.
Dec  6 04:00:30 np0005548915 systemd: Reached target Initrd /usr File System.
Dec  6 04:00:30 np0005548915 systemd: Reached target Local File Systems.
Dec  6 04:00:30 np0005548915 systemd: Reached target Path Units.
Dec  6 04:00:30 np0005548915 systemd: Reached target Slice Units.
Dec  6 04:00:30 np0005548915 systemd: Reached target Swaps.
Dec  6 04:00:30 np0005548915 systemd: Reached target Timer Units.
Dec  6 04:00:30 np0005548915 systemd: Listening on D-Bus System Message Bus Socket.
Dec  6 04:00:30 np0005548915 systemd: Listening on Journal Socket (/dev/log).
Dec  6 04:00:30 np0005548915 systemd: Listening on Journal Socket.
Dec  6 04:00:30 np0005548915 systemd: Listening on udev Control Socket.
Dec  6 04:00:30 np0005548915 systemd: Listening on udev Kernel Socket.
Dec  6 04:00:30 np0005548915 systemd: Reached target Socket Units.
Dec  6 04:00:30 np0005548915 systemd: Starting Create List of Static Device Nodes...
Dec  6 04:00:30 np0005548915 systemd: Starting Journal Service...
Dec  6 04:00:30 np0005548915 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  6 04:00:30 np0005548915 systemd: Starting Apply Kernel Variables...
Dec  6 04:00:30 np0005548915 systemd: Starting Create System Users...
Dec  6 04:00:30 np0005548915 systemd: Starting Setup Virtual Console...
Dec  6 04:00:30 np0005548915 systemd: Finished Create List of Static Device Nodes.
Dec  6 04:00:30 np0005548915 systemd: Finished Apply Kernel Variables.
Dec  6 04:00:30 np0005548915 systemd: Finished Create System Users.
Dec  6 04:00:30 np0005548915 systemd-journald[304]: Journal started
Dec  6 04:00:30 np0005548915 systemd-journald[304]: Runtime Journal (/run/log/journal/cc5c2b35ce1b4acf99067bdc7897f14e) is 8.0M, max 153.6M, 145.6M free.
Dec  6 04:00:30 np0005548915 systemd-sysusers[309]: Creating group 'users' with GID 100.
Dec  6 04:00:30 np0005548915 systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Dec  6 04:00:30 np0005548915 systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  6 04:00:30 np0005548915 systemd: Starting Create Static Device Nodes in /dev...
Dec  6 04:00:30 np0005548915 systemd: Started Journal Service.
Dec  6 04:00:30 np0005548915 systemd[1]: Starting Create Volatile Files and Directories...
Dec  6 04:00:30 np0005548915 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  6 04:00:30 np0005548915 systemd[1]: Finished Create Volatile Files and Directories.
Dec  6 04:00:30 np0005548915 systemd[1]: Finished Setup Virtual Console.
Dec  6 04:00:30 np0005548915 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  6 04:00:30 np0005548915 systemd[1]: Starting dracut cmdline hook...
Dec  6 04:00:30 np0005548915 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Dec  6 04:00:30 np0005548915 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  6 04:00:30 np0005548915 systemd[1]: Finished dracut cmdline hook.
Dec  6 04:00:30 np0005548915 systemd[1]: Starting dracut pre-udev hook...
Dec  6 04:00:30 np0005548915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  6 04:00:30 np0005548915 kernel: device-mapper: uevent: version 1.0.3
Dec  6 04:00:30 np0005548915 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  6 04:00:30 np0005548915 kernel: RPC: Registered named UNIX socket transport module.
Dec  6 04:00:30 np0005548915 kernel: RPC: Registered udp transport module.
Dec  6 04:00:30 np0005548915 kernel: RPC: Registered tcp transport module.
Dec  6 04:00:30 np0005548915 kernel: RPC: Registered tcp-with-tls transport module.
Dec  6 04:00:30 np0005548915 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  6 04:00:30 np0005548915 rpc.statd[440]: Version 2.5.4 starting
Dec  6 04:00:30 np0005548915 rpc.statd[440]: Initializing NSM state
Dec  6 04:00:30 np0005548915 rpc.idmapd[445]: Setting log level to 0
Dec  6 04:00:30 np0005548915 systemd[1]: Finished dracut pre-udev hook.
Dec  6 04:00:30 np0005548915 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  6 04:00:30 np0005548915 systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Dec  6 04:00:30 np0005548915 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  6 04:00:30 np0005548915 systemd[1]: Starting dracut pre-trigger hook...
Dec  6 04:00:30 np0005548915 systemd[1]: Finished dracut pre-trigger hook.
Dec  6 04:00:30 np0005548915 systemd[1]: Starting Coldplug All udev Devices...
Dec  6 04:00:30 np0005548915 systemd[1]: Created slice Slice /system/modprobe.
Dec  6 04:00:30 np0005548915 systemd[1]: Starting Load Kernel Module configfs...
Dec  6 04:00:30 np0005548915 systemd[1]: Finished Coldplug All udev Devices.
Dec  6 04:00:30 np0005548915 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  6 04:00:30 np0005548915 systemd[1]: Finished Load Kernel Module configfs.
Dec  6 04:00:30 np0005548915 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  6 04:00:30 np0005548915 systemd[1]: Reached target Network.
Dec  6 04:00:30 np0005548915 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  6 04:00:30 np0005548915 systemd[1]: Starting dracut initqueue hook...
Dec  6 04:00:30 np0005548915 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  6 04:00:30 np0005548915 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  6 04:00:31 np0005548915 kernel: vda: vda1
Dec  6 04:00:31 np0005548915 kernel: scsi host0: ata_piix
Dec  6 04:00:31 np0005548915 kernel: scsi host1: ata_piix
Dec  6 04:00:31 np0005548915 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  6 04:00:31 np0005548915 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  6 04:00:31 np0005548915 systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 04:00:31 np0005548915 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Initrd Root Device.
Dec  6 04:00:31 np0005548915 systemd[1]: Mounting Kernel Configuration File System...
Dec  6 04:00:31 np0005548915 systemd[1]: Mounted Kernel Configuration File System.
Dec  6 04:00:31 np0005548915 kernel: ata1: found unknown device (class 0)
Dec  6 04:00:31 np0005548915 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  6 04:00:31 np0005548915 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target System Initialization.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Basic System.
Dec  6 04:00:31 np0005548915 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  6 04:00:31 np0005548915 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  6 04:00:31 np0005548915 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  6 04:00:31 np0005548915 systemd[1]: Finished dracut initqueue hook.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Remote File Systems.
Dec  6 04:00:31 np0005548915 systemd[1]: Starting dracut pre-mount hook...
Dec  6 04:00:31 np0005548915 systemd[1]: Finished dracut pre-mount hook.
Dec  6 04:00:31 np0005548915 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  6 04:00:31 np0005548915 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec  6 04:00:31 np0005548915 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  6 04:00:31 np0005548915 systemd[1]: Mounting /sysroot...
Dec  6 04:00:31 np0005548915 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  6 04:00:31 np0005548915 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  6 04:00:31 np0005548915 kernel: XFS (vda1): Ending clean mount
Dec  6 04:00:31 np0005548915 systemd[1]: Mounted /sysroot.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Initrd Root File System.
Dec  6 04:00:31 np0005548915 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  6 04:00:31 np0005548915 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  6 04:00:31 np0005548915 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Initrd File Systems.
Dec  6 04:00:31 np0005548915 systemd[1]: Reached target Initrd Default Target.
Dec  6 04:00:31 np0005548915 systemd[1]: Starting dracut mount hook...
Dec  6 04:00:31 np0005548915 systemd[1]: Finished dracut mount hook.
Dec  6 04:00:31 np0005548915 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  6 04:00:32 np0005548915 rpc.idmapd[445]: exiting on signal 15
Dec  6 04:00:32 np0005548915 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Network.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Timer Units.
Dec  6 04:00:32 np0005548915 systemd[1]: dbus.socket: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  6 04:00:32 np0005548915 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Initrd Default Target.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Basic System.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Initrd Root Device.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Initrd /usr File System.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Path Units.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Remote File Systems.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Slice Units.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Socket Units.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target System Initialization.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Local File Systems.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Swaps.
Dec  6 04:00:32 np0005548915 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped dracut mount hook.
Dec  6 04:00:32 np0005548915 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped dracut pre-mount hook.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  6 04:00:32 np0005548915 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped dracut initqueue hook.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Apply Kernel Variables.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Coldplug All udev Devices.
Dec  6 04:00:32 np0005548915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped dracut pre-trigger hook.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Setup Virtual Console.
Dec  6 04:00:32 np0005548915 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Closed udev Control Socket.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Closed udev Kernel Socket.
Dec  6 04:00:32 np0005548915 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped dracut pre-udev hook.
Dec  6 04:00:32 np0005548915 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped dracut cmdline hook.
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Cleanup udev Database...
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  6 04:00:32 np0005548915 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Stopped Create System Users.
Dec  6 04:00:32 np0005548915 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Cleanup udev Database.
Dec  6 04:00:32 np0005548915 systemd[1]: Reached target Switch Root.
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Switch Root...
Dec  6 04:00:32 np0005548915 systemd[1]: Switching root.
Dec  6 04:00:32 np0005548915 systemd-journald[304]: Journal stopped
Dec  6 04:00:32 np0005548915 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  6 04:00:32 np0005548915 kernel: audit: type=1404 audit(1765011632.309:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  6 04:00:32 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:00:32 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:00:32 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:00:32 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:00:32 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:00:32 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:00:32 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:00:32 np0005548915 kernel: audit: type=1403 audit(1765011632.452:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  6 04:00:32 np0005548915 systemd: Successfully loaded SELinux policy in 146.270ms.
Dec  6 04:00:32 np0005548915 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.741ms.
Dec  6 04:00:32 np0005548915 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  6 04:00:32 np0005548915 systemd: Detected virtualization kvm.
Dec  6 04:00:32 np0005548915 systemd: Detected architecture x86-64.
Dec  6 04:00:32 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:00:32 np0005548915 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd: Stopped Switch Root.
Dec  6 04:00:32 np0005548915 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  6 04:00:32 np0005548915 systemd: Created slice Slice /system/getty.
Dec  6 04:00:32 np0005548915 systemd: Created slice Slice /system/serial-getty.
Dec  6 04:00:32 np0005548915 systemd: Created slice Slice /system/sshd-keygen.
Dec  6 04:00:32 np0005548915 systemd: Created slice User and Session Slice.
Dec  6 04:00:32 np0005548915 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  6 04:00:32 np0005548915 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  6 04:00:32 np0005548915 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  6 04:00:32 np0005548915 systemd: Reached target Local Encrypted Volumes.
Dec  6 04:00:32 np0005548915 systemd: Stopped target Switch Root.
Dec  6 04:00:32 np0005548915 systemd: Stopped target Initrd File Systems.
Dec  6 04:00:32 np0005548915 systemd: Stopped target Initrd Root File System.
Dec  6 04:00:32 np0005548915 systemd: Reached target Local Integrity Protected Volumes.
Dec  6 04:00:32 np0005548915 systemd: Reached target Path Units.
Dec  6 04:00:32 np0005548915 systemd: Reached target rpc_pipefs.target.
Dec  6 04:00:32 np0005548915 systemd: Reached target Slice Units.
Dec  6 04:00:32 np0005548915 systemd: Reached target Swaps.
Dec  6 04:00:32 np0005548915 systemd: Reached target Local Verity Protected Volumes.
Dec  6 04:00:32 np0005548915 systemd: Listening on RPCbind Server Activation Socket.
Dec  6 04:00:32 np0005548915 systemd: Reached target RPC Port Mapper.
Dec  6 04:00:32 np0005548915 systemd: Listening on Process Core Dump Socket.
Dec  6 04:00:32 np0005548915 systemd: Listening on initctl Compatibility Named Pipe.
Dec  6 04:00:32 np0005548915 systemd: Listening on udev Control Socket.
Dec  6 04:00:32 np0005548915 systemd: Listening on udev Kernel Socket.
Dec  6 04:00:32 np0005548915 systemd: Mounting Huge Pages File System...
Dec  6 04:00:32 np0005548915 systemd: Mounting POSIX Message Queue File System...
Dec  6 04:00:32 np0005548915 systemd: Mounting Kernel Debug File System...
Dec  6 04:00:32 np0005548915 systemd: Mounting Kernel Trace File System...
Dec  6 04:00:32 np0005548915 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  6 04:00:32 np0005548915 systemd: Starting Create List of Static Device Nodes...
Dec  6 04:00:32 np0005548915 systemd: Starting Load Kernel Module configfs...
Dec  6 04:00:32 np0005548915 systemd: Starting Load Kernel Module drm...
Dec  6 04:00:32 np0005548915 systemd: Starting Load Kernel Module efi_pstore...
Dec  6 04:00:32 np0005548915 systemd: Starting Load Kernel Module fuse...
Dec  6 04:00:32 np0005548915 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  6 04:00:32 np0005548915 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd: Stopped File System Check on Root Device.
Dec  6 04:00:32 np0005548915 systemd: Stopped Journal Service.
Dec  6 04:00:32 np0005548915 systemd: Starting Journal Service...
Dec  6 04:00:32 np0005548915 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  6 04:00:32 np0005548915 systemd: Starting Generate network units from Kernel command line...
Dec  6 04:00:32 np0005548915 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  6 04:00:32 np0005548915 systemd: Starting Remount Root and Kernel File Systems...
Dec  6 04:00:32 np0005548915 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  6 04:00:32 np0005548915 systemd: Starting Apply Kernel Variables...
Dec  6 04:00:32 np0005548915 kernel: fuse: init (API version 7.37)
Dec  6 04:00:32 np0005548915 systemd: Starting Coldplug All udev Devices...
Dec  6 04:00:32 np0005548915 systemd-journald[678]: Journal started
Dec  6 04:00:32 np0005548915 systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  6 04:00:32 np0005548915 systemd[1]: Queued start job for default target Multi-User System.
Dec  6 04:00:32 np0005548915 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd: Started Journal Service.
Dec  6 04:00:32 np0005548915 systemd[1]: Mounted Huge Pages File System.
Dec  6 04:00:32 np0005548915 systemd[1]: Mounted POSIX Message Queue File System.
Dec  6 04:00:32 np0005548915 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  6 04:00:32 np0005548915 systemd[1]: Mounted Kernel Debug File System.
Dec  6 04:00:32 np0005548915 systemd[1]: Mounted Kernel Trace File System.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Create List of Static Device Nodes.
Dec  6 04:00:32 np0005548915 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Load Kernel Module configfs.
Dec  6 04:00:32 np0005548915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  6 04:00:32 np0005548915 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Load Kernel Module fuse.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Generate network units from Kernel command line.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Apply Kernel Variables.
Dec  6 04:00:32 np0005548915 kernel: ACPI: bus type drm_connector registered
Dec  6 04:00:32 np0005548915 systemd[1]: Mounting FUSE Control File System...
Dec  6 04:00:32 np0005548915 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Rebuild Hardware Database...
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  6 04:00:32 np0005548915 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Load/Save OS Random Seed...
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Create System Users...
Dec  6 04:00:32 np0005548915 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  6 04:00:32 np0005548915 systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Load Kernel Module drm.
Dec  6 04:00:32 np0005548915 systemd-journald[678]: Received client request to flush runtime journal.
Dec  6 04:00:32 np0005548915 systemd[1]: Mounted FUSE Control File System.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Load/Save OS Random Seed.
Dec  6 04:00:32 np0005548915 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Create System Users.
Dec  6 04:00:32 np0005548915 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  6 04:00:32 np0005548915 systemd[1]: Finished Coldplug All udev Devices.
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target Preparation for Local File Systems.
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target Local File Systems.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  6 04:00:33 np0005548915 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  6 04:00:33 np0005548915 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  6 04:00:33 np0005548915 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Automatic Boot Loader Update...
Dec  6 04:00:33 np0005548915 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Create Volatile Files and Directories...
Dec  6 04:00:33 np0005548915 bootctl[695]: Couldn't find EFI system partition, skipping.
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Automatic Boot Loader Update.
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Create Volatile Files and Directories.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Security Auditing Service...
Dec  6 04:00:33 np0005548915 systemd[1]: Starting RPC Bind...
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Rebuild Journal Catalog...
Dec  6 04:00:33 np0005548915 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  6 04:00:33 np0005548915 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Rebuild Journal Catalog.
Dec  6 04:00:33 np0005548915 systemd[1]: Started RPC Bind.
Dec  6 04:00:33 np0005548915 augenrules[706]: /sbin/augenrules: No change
Dec  6 04:00:33 np0005548915 augenrules[721]: No rules
Dec  6 04:00:33 np0005548915 augenrules[721]: enabled 1
Dec  6 04:00:33 np0005548915 augenrules[721]: failure 1
Dec  6 04:00:33 np0005548915 augenrules[721]: pid 701
Dec  6 04:00:33 np0005548915 augenrules[721]: rate_limit 0
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_limit 8192
Dec  6 04:00:33 np0005548915 augenrules[721]: lost 0
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog 3
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_wait_time 60000
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_wait_time_actual 0
Dec  6 04:00:33 np0005548915 augenrules[721]: enabled 1
Dec  6 04:00:33 np0005548915 augenrules[721]: failure 1
Dec  6 04:00:33 np0005548915 augenrules[721]: pid 701
Dec  6 04:00:33 np0005548915 augenrules[721]: rate_limit 0
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_limit 8192
Dec  6 04:00:33 np0005548915 augenrules[721]: lost 0
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog 0
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_wait_time 60000
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_wait_time_actual 0
Dec  6 04:00:33 np0005548915 augenrules[721]: enabled 1
Dec  6 04:00:33 np0005548915 augenrules[721]: failure 1
Dec  6 04:00:33 np0005548915 augenrules[721]: pid 701
Dec  6 04:00:33 np0005548915 augenrules[721]: rate_limit 0
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_limit 8192
Dec  6 04:00:33 np0005548915 augenrules[721]: lost 0
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog 3
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_wait_time 60000
Dec  6 04:00:33 np0005548915 augenrules[721]: backlog_wait_time_actual 0
Dec  6 04:00:33 np0005548915 systemd[1]: Started Security Auditing Service.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Rebuild Hardware Database.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Update is Completed...
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Update is Completed.
Dec  6 04:00:33 np0005548915 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec  6 04:00:33 np0005548915 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target System Initialization.
Dec  6 04:00:33 np0005548915 systemd[1]: Started dnf makecache --timer.
Dec  6 04:00:33 np0005548915 systemd[1]: Started Daily rotation of log files.
Dec  6 04:00:33 np0005548915 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target Timer Units.
Dec  6 04:00:33 np0005548915 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  6 04:00:33 np0005548915 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target Socket Units.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting D-Bus System Message Bus...
Dec  6 04:00:33 np0005548915 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  6 04:00:33 np0005548915 systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 04:00:33 np0005548915 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Load Kernel Module configfs...
Dec  6 04:00:33 np0005548915 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  6 04:00:33 np0005548915 systemd[1]: Finished Load Kernel Module configfs.
Dec  6 04:00:33 np0005548915 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  6 04:00:33 np0005548915 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  6 04:00:33 np0005548915 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  6 04:00:33 np0005548915 systemd[1]: Started D-Bus System Message Bus.
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target Basic System.
Dec  6 04:00:33 np0005548915 dbus-broker-lau[767]: Ready
Dec  6 04:00:33 np0005548915 systemd[1]: Starting NTP client/server...
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  6 04:00:33 np0005548915 chronyd[778]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  6 04:00:33 np0005548915 chronyd[778]: Loaded 0 symmetric keys
Dec  6 04:00:33 np0005548915 chronyd[778]: Using right/UTC timezone to obtain leap second data
Dec  6 04:00:33 np0005548915 chronyd[778]: Loaded seccomp filter (level 2)
Dec  6 04:00:33 np0005548915 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  6 04:00:33 np0005548915 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  6 04:00:33 np0005548915 systemd[1]: Starting IPv4 firewall with iptables...
Dec  6 04:00:33 np0005548915 systemd[1]: Started irqbalance daemon.
Dec  6 04:00:33 np0005548915 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  6 04:00:33 np0005548915 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 04:00:33 np0005548915 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 04:00:33 np0005548915 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target sshd-keygen.target.
Dec  6 04:00:33 np0005548915 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  6 04:00:33 np0005548915 systemd[1]: Reached target User and Group Name Lookups.
Dec  6 04:00:33 np0005548915 systemd[1]: Starting User Login Management...
Dec  6 04:00:34 np0005548915 kernel: kvm_amd: TSC scaling supported
Dec  6 04:00:34 np0005548915 kernel: kvm_amd: Nested Virtualization enabled
Dec  6 04:00:34 np0005548915 kernel: kvm_amd: Nested Paging enabled
Dec  6 04:00:34 np0005548915 kernel: kvm_amd: LBR virtualization supported
Dec  6 04:00:34 np0005548915 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  6 04:00:34 np0005548915 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  6 04:00:34 np0005548915 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  6 04:00:34 np0005548915 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  6 04:00:34 np0005548915 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  6 04:00:34 np0005548915 kernel: Console: switching to colour dummy device 80x25
Dec  6 04:00:34 np0005548915 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  6 04:00:34 np0005548915 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  6 04:00:34 np0005548915 kernel: [drm] features: -context_init
Dec  6 04:00:34 np0005548915 kernel: [drm] number of scanouts: 1
Dec  6 04:00:34 np0005548915 kernel: [drm] number of cap sets: 0
Dec  6 04:00:34 np0005548915 systemd-logind[795]: New seat seat0.
Dec  6 04:00:34 np0005548915 systemd[1]: Started User Login Management.
Dec  6 04:00:34 np0005548915 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  6 04:00:34 np0005548915 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  6 04:00:34 np0005548915 kernel: Console: switching to colour frame buffer device 128x48
Dec  6 04:00:34 np0005548915 systemd[1]: Started NTP client/server.
Dec  6 04:00:34 np0005548915 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  6 04:00:34 np0005548915 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  6 04:00:34 np0005548915 iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Dec  6 04:00:34 np0005548915 systemd[1]: Finished IPv4 firewall with iptables.
Dec  6 04:00:34 np0005548915 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 06 Dec 2025 09:00:34 +0000. Up 6.00 seconds.
Dec  6 04:00:34 np0005548915 systemd[1]: run-cloud\x2dinit-tmp-tmplhnuvwvm.mount: Deactivated successfully.
Dec  6 04:00:34 np0005548915 systemd[1]: Starting Hostname Service...
Dec  6 04:00:34 np0005548915 systemd[1]: Started Hostname Service.
Dec  6 04:00:34 np0005548915 systemd-hostnamed[851]: Hostname set to <np0005548915.novalocal> (static)
Dec  6 04:00:34 np0005548915 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  6 04:00:34 np0005548915 systemd[1]: Reached target Preparation for Network.
Dec  6 04:00:34 np0005548915 systemd[1]: Starting Network Manager...
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8246] NetworkManager (version 1.54.1-1.el9) is starting... (boot:eb1a7567-b576-49d7-a613-e357bf119324)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8250] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8314] manager[0x5601b98d4080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8352] hostname: hostname: using hostnamed
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8353] hostname: static hostname changed from (none) to "np0005548915.novalocal"
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8357] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8447] manager[0x5601b98d4080]: rfkill: Wi-Fi hardware radio set enabled
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8450] manager[0x5601b98d4080]: rfkill: WWAN hardware radio set enabled
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8491] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8493] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8493] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8494] manager: Networking is enabled by state file
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8496] settings: Loaded settings plugin: keyfile (internal)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8507] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8524] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8536] dhcp: init: Using DHCP client 'internal'
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8539] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  6 04:00:34 np0005548915 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8552] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8563] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8571] device (lo): Activation: starting connection 'lo' (40483b14-1904-462e-975f-deec93e74606)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8579] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8582] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8613] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8616] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8618] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8619] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8621] device (eth0): carrier: link connected
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8622] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8627] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8633] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8636] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8636] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8638] manager: NetworkManager state is now CONNECTING
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8639] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8643] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8645] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:00:34 np0005548915 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 04:00:34 np0005548915 systemd[1]: Started Network Manager.
Dec  6 04:00:34 np0005548915 systemd[1]: Reached target Network.
Dec  6 04:00:34 np0005548915 systemd[1]: Starting Network Manager Wait Online...
Dec  6 04:00:34 np0005548915 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  6 04:00:34 np0005548915 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8949] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8953] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  6 04:00:34 np0005548915 NetworkManager[855]: <info>  [1765011634.8961] device (lo): Activation: successful, device activated.
Dec  6 04:00:34 np0005548915 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  6 04:00:34 np0005548915 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  6 04:00:34 np0005548915 systemd[1]: Reached target NFS client services.
Dec  6 04:00:34 np0005548915 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  6 04:00:34 np0005548915 systemd[1]: Reached target Remote File Systems.
Dec  6 04:00:34 np0005548915 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9142] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9156] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9175] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9210] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9212] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9214] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9217] device (eth0): Activation: successful, device activated.
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9220] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  6 04:00:35 np0005548915 NetworkManager[855]: <info>  [1765011635.9222] manager: startup complete
Dec  6 04:00:35 np0005548915 systemd[1]: Finished Network Manager Wait Online.
Dec  6 04:00:35 np0005548915 systemd[1]: Starting Cloud-init: Network Stage...
Dec  6 04:00:36 np0005548915 cloud-init[919]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 06 Dec 2025 09:00:36 +0000. Up 7.95 seconds.
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |  eth0  | True |         38.102.83.27         | 255.255.255.0 | global | fa:16:3e:87:1e:0a |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |  eth0  | True | fe80::f816:3eff:fe87:1e0a/64 |       .       |  link  | fa:16:3e:87:1e:0a |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec  6 04:00:36 np0005548915 cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  6 04:00:37 np0005548915 cloud-init[919]: Generating public/private rsa key pair.
Dec  6 04:00:37 np0005548915 cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  6 04:00:37 np0005548915 cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  6 04:00:37 np0005548915 cloud-init[919]: The key fingerprint is:
Dec  6 04:00:37 np0005548915 cloud-init[919]: SHA256:4HsYvYZbsIZrlt259ZE1uctiG5AJnDk0WaQERd6R7L8 root@np0005548915.novalocal
Dec  6 04:00:37 np0005548915 cloud-init[919]: The key's randomart image is:
Dec  6 04:00:37 np0005548915 cloud-init[919]: +---[RSA 3072]----+
Dec  6 04:00:37 np0005548915 cloud-init[919]: |      .+*=+.     |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |       =.*o.     |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |      . O..      |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |     . o o.o   . |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |      + S +.  +  |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |     . B . ..o o |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |    .o*.=.. +..  |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |    +o.=o. .E+ . |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |   o. . .. .ooo  |
Dec  6 04:00:37 np0005548915 cloud-init[919]: +----[SHA256]-----+
Dec  6 04:00:37 np0005548915 cloud-init[919]: Generating public/private ecdsa key pair.
Dec  6 04:00:37 np0005548915 cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  6 04:00:37 np0005548915 cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  6 04:00:37 np0005548915 cloud-init[919]: The key fingerprint is:
Dec  6 04:00:37 np0005548915 cloud-init[919]: SHA256:drQfY72BKLu6JfujwiJ6Z3btUKPajzO/8+eGKz1qKbY root@np0005548915.novalocal
Dec  6 04:00:37 np0005548915 cloud-init[919]: The key's randomart image is:
Dec  6 04:00:37 np0005548915 cloud-init[919]: +---[ECDSA 256]---+
Dec  6 04:00:37 np0005548915 cloud-init[919]: |                 |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |                 |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |          .      |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |         . o o   |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |        S + = o  |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |       + = o o o |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |   .  +.oo .. .  |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |. o *o**B.+ o    |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |oo =.+E#OBo*.    |
Dec  6 04:00:37 np0005548915 cloud-init[919]: +----[SHA256]-----+
Dec  6 04:00:37 np0005548915 cloud-init[919]: Generating public/private ed25519 key pair.
Dec  6 04:00:37 np0005548915 cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  6 04:00:37 np0005548915 cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  6 04:00:37 np0005548915 cloud-init[919]: The key fingerprint is:
Dec  6 04:00:37 np0005548915 cloud-init[919]: SHA256:87UdsGwqdSjB54ua/WT855Du5izy3dQjhLewGPTXI4k root@np0005548915.novalocal
Dec  6 04:00:37 np0005548915 cloud-init[919]: The key's randomart image is:
Dec  6 04:00:37 np0005548915 cloud-init[919]: +--[ED25519 256]--+
Dec  6 04:00:37 np0005548915 cloud-init[919]: |                 |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |       .         |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |        o.. .    |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |        .+.oooo  |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |        S.+E**.o |
Dec  6 04:00:37 np0005548915 cloud-init[919]: |         Bo**+oo.|
Dec  6 04:00:37 np0005548915 cloud-init[919]: |        o.B.+oo..|
Dec  6 04:00:37 np0005548915 cloud-init[919]: |       +.+.=.+o .|
Dec  6 04:00:37 np0005548915 cloud-init[919]: |      o .+o=Boo  |
Dec  6 04:00:37 np0005548915 cloud-init[919]: +----[SHA256]-----+
Dec  6 04:00:37 np0005548915 systemd[1]: Finished Cloud-init: Network Stage.
Dec  6 04:00:37 np0005548915 systemd[1]: Reached target Cloud-config availability.
Dec  6 04:00:37 np0005548915 systemd[1]: Reached target Network is Online.
Dec  6 04:00:37 np0005548915 systemd[1]: Starting Cloud-init: Config Stage...
Dec  6 04:00:37 np0005548915 systemd[1]: Starting Crash recovery kernel arming...
Dec  6 04:00:37 np0005548915 systemd[1]: Starting Notify NFS peers of a restart...
Dec  6 04:00:37 np0005548915 systemd[1]: Starting System Logging Service...
Dec  6 04:00:37 np0005548915 systemd[1]: Starting OpenSSH server daemon...
Dec  6 04:00:37 np0005548915 systemd[1]: Starting Permit User Sessions...
Dec  6 04:00:37 np0005548915 sm-notify[1003]: Version 2.5.4 starting
Dec  6 04:00:37 np0005548915 systemd[1]: Started Notify NFS peers of a restart.
Dec  6 04:00:37 np0005548915 systemd[1]: Finished Permit User Sessions.
Dec  6 04:00:37 np0005548915 systemd[1]: Started OpenSSH server daemon.
Dec  6 04:00:37 np0005548915 systemd[1]: Started Command Scheduler.
Dec  6 04:00:37 np0005548915 systemd[1]: Started Getty on tty1.
Dec  6 04:00:37 np0005548915 systemd[1]: Started Serial Getty on ttyS0.
Dec  6 04:00:37 np0005548915 systemd[1]: Reached target Login Prompts.
Dec  6 04:00:37 np0005548915 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec  6 04:00:37 np0005548915 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  6 04:00:37 np0005548915 systemd[1]: Started System Logging Service.
Dec  6 04:00:37 np0005548915 systemd[1]: Reached target Multi-User System.
Dec  6 04:00:37 np0005548915 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  6 04:00:37 np0005548915 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  6 04:00:37 np0005548915 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  6 04:00:37 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:00:37 np0005548915 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Dec  6 04:00:37 np0005548915 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  6 04:00:37 np0005548915 cloud-init[1136]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 06 Dec 2025 09:00:37 +0000. Up 9.43 seconds.
Dec  6 04:00:37 np0005548915 systemd[1]: Finished Cloud-init: Config Stage.
Dec  6 04:00:37 np0005548915 systemd[1]: Starting Cloud-init: Final Stage...
Dec  6 04:00:38 np0005548915 dracut[1264]: dracut-057-102.git20250818.el9
Dec  6 04:00:38 np0005548915 dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  6 04:00:38 np0005548915 cloud-init[1298]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 06 Dec 2025 09:00:38 +0000. Up 9.86 seconds.
Dec  6 04:00:38 np0005548915 cloud-init[1327]: #############################################################
Dec  6 04:00:38 np0005548915 cloud-init[1330]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  6 04:00:38 np0005548915 cloud-init[1337]: 256 SHA256:drQfY72BKLu6JfujwiJ6Z3btUKPajzO/8+eGKz1qKbY root@np0005548915.novalocal (ECDSA)
Dec  6 04:00:38 np0005548915 cloud-init[1341]: 256 SHA256:87UdsGwqdSjB54ua/WT855Du5izy3dQjhLewGPTXI4k root@np0005548915.novalocal (ED25519)
Dec  6 04:00:38 np0005548915 cloud-init[1343]: 3072 SHA256:4HsYvYZbsIZrlt259ZE1uctiG5AJnDk0WaQERd6R7L8 root@np0005548915.novalocal (RSA)
Dec  6 04:00:38 np0005548915 cloud-init[1344]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  6 04:00:38 np0005548915 cloud-init[1345]: #############################################################
Dec  6 04:00:38 np0005548915 cloud-init[1298]: Cloud-init v. 24.4-7.el9 finished at Sat, 06 Dec 2025 09:00:38 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.04 seconds
Dec  6 04:00:38 np0005548915 systemd[1]: Finished Cloud-init: Final Stage.
Dec  6 04:00:38 np0005548915 systemd[1]: Reached target Cloud-init target.
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  6 04:00:38 np0005548915 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: memstrack is not available
Dec  6 04:00:39 np0005548915 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  6 04:00:39 np0005548915 dracut[1266]: memstrack is not available
Dec  6 04:00:39 np0005548915 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  6 04:00:39 np0005548915 dracut[1266]: *** Including module: systemd ***
Dec  6 04:00:39 np0005548915 dracut[1266]: *** Including module: fips ***
Dec  6 04:00:40 np0005548915 dracut[1266]: *** Including module: systemd-initrd ***
Dec  6 04:00:40 np0005548915 dracut[1266]: *** Including module: i18n ***
Dec  6 04:00:40 np0005548915 dracut[1266]: *** Including module: drm ***
Dec  6 04:00:40 np0005548915 dracut[1266]: *** Including module: prefixdevname ***
Dec  6 04:00:40 np0005548915 dracut[1266]: *** Including module: kernel-modules ***
Dec  6 04:00:40 np0005548915 kernel: block vda: the capability attribute has been deprecated.
Dec  6 04:00:41 np0005548915 chronyd[778]: Selected source 174.142.148.226 (2.centos.pool.ntp.org)
Dec  6 04:00:41 np0005548915 chronyd[778]: System clock wrong by 1.153678 seconds
Dec  6 04:00:41 np0005548915 chronyd[778]: System clock was stepped by 1.153678 seconds
Dec  6 04:00:41 np0005548915 chronyd[778]: System clock TAI offset set to 37 seconds
Dec  6 04:00:42 np0005548915 dracut[1266]: *** Including module: kernel-modules-extra ***
Dec  6 04:00:42 np0005548915 dracut[1266]: *** Including module: qemu ***
Dec  6 04:00:42 np0005548915 dracut[1266]: *** Including module: fstab-sys ***
Dec  6 04:00:42 np0005548915 dracut[1266]: *** Including module: rootfs-block ***
Dec  6 04:00:42 np0005548915 dracut[1266]: *** Including module: terminfo ***
Dec  6 04:00:42 np0005548915 dracut[1266]: *** Including module: udev-rules ***
Dec  6 04:00:43 np0005548915 dracut[1266]: Skipping udev rule: 91-permissions.rules
Dec  6 04:00:43 np0005548915 dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  6 04:00:43 np0005548915 dracut[1266]: *** Including module: virtiofs ***
Dec  6 04:00:43 np0005548915 dracut[1266]: *** Including module: dracut-systemd ***
Dec  6 04:00:43 np0005548915 dracut[1266]: *** Including module: usrmount ***
Dec  6 04:00:43 np0005548915 dracut[1266]: *** Including module: base ***
Dec  6 04:00:43 np0005548915 dracut[1266]: *** Including module: fs-lib ***
Dec  6 04:00:43 np0005548915 dracut[1266]: *** Including module: kdumpbase ***
Dec  6 04:00:43 np0005548915 dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  6 04:00:43 np0005548915 dracut[1266]:  microcode_ctl module: mangling fw_dir
Dec  6 04:00:43 np0005548915 dracut[1266]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  6 04:00:43 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  6 04:00:43 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel" is ignored
Dec  6 04:00:43 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  6 04:00:43 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  6 04:00:43 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  6 04:00:44 np0005548915 dracut[1266]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  6 04:00:44 np0005548915 dracut[1266]: *** Including module: openssl ***
Dec  6 04:00:44 np0005548915 dracut[1266]: *** Including module: shutdown ***
Dec  6 04:00:44 np0005548915 dracut[1266]: *** Including module: squash ***
Dec  6 04:00:44 np0005548915 dracut[1266]: *** Including modules done ***
Dec  6 04:00:44 np0005548915 dracut[1266]: *** Installing kernel module dependencies ***
Dec  6 04:00:45 np0005548915 dracut[1266]: *** Installing kernel module dependencies done ***
Dec  6 04:00:45 np0005548915 dracut[1266]: *** Resolving executable dependencies ***
Dec  6 04:00:45 np0005548915 irqbalance[788]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  6 04:00:45 np0005548915 irqbalance[788]: IRQ 25 affinity is now unmanaged
Dec  6 04:00:45 np0005548915 irqbalance[788]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  6 04:00:45 np0005548915 irqbalance[788]: IRQ 31 affinity is now unmanaged
Dec  6 04:00:45 np0005548915 irqbalance[788]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  6 04:00:45 np0005548915 irqbalance[788]: IRQ 28 affinity is now unmanaged
Dec  6 04:00:45 np0005548915 irqbalance[788]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  6 04:00:45 np0005548915 irqbalance[788]: IRQ 32 affinity is now unmanaged
Dec  6 04:00:45 np0005548915 irqbalance[788]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  6 04:00:45 np0005548915 irqbalance[788]: IRQ 30 affinity is now unmanaged
Dec  6 04:00:45 np0005548915 irqbalance[788]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  6 04:00:45 np0005548915 irqbalance[788]: IRQ 29 affinity is now unmanaged
Dec  6 04:00:46 np0005548915 dracut[1266]: *** Resolving executable dependencies done ***
Dec  6 04:00:46 np0005548915 dracut[1266]: *** Generating early-microcode cpio image ***
Dec  6 04:00:46 np0005548915 dracut[1266]: *** Store current command line parameters ***
Dec  6 04:00:46 np0005548915 dracut[1266]: Stored kernel commandline:
Dec  6 04:00:46 np0005548915 dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Dec  6 04:00:46 np0005548915 dracut[1266]: *** Install squash loader ***
Dec  6 04:00:47 np0005548915 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 04:00:48 np0005548915 dracut[1266]: *** Squashing the files inside the initramfs ***
Dec  6 04:00:49 np0005548915 dracut[1266]: *** Squashing the files inside the initramfs done ***
Dec  6 04:00:49 np0005548915 dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  6 04:00:49 np0005548915 dracut[1266]: *** Hardlinking files ***
Dec  6 04:00:49 np0005548915 dracut[1266]: *** Hardlinking files done ***
Dec  6 04:00:49 np0005548915 dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  6 04:00:50 np0005548915 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Dec  6 04:00:50 np0005548915 kdumpctl[1017]: kdump: Starting kdump: [OK]
Dec  6 04:00:50 np0005548915 systemd[1]: Finished Crash recovery kernel arming.
Dec  6 04:00:50 np0005548915 systemd[1]: Startup finished in 1.595s (kernel) + 2.408s (initrd) + 16.678s (userspace) = 20.682s.
Dec  6 04:00:54 np0005548915 systemd-logind[795]: New session 1 of user zuul.
Dec  6 04:00:54 np0005548915 systemd[1]: Created slice User Slice of UID 1000.
Dec  6 04:00:54 np0005548915 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  6 04:00:54 np0005548915 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  6 04:00:54 np0005548915 systemd[1]: Starting User Manager for UID 1000...
Dec  6 04:00:54 np0005548915 systemd[4299]: Queued start job for default target Main User Target.
Dec  6 04:00:54 np0005548915 systemd[4299]: Created slice User Application Slice.
Dec  6 04:00:54 np0005548915 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 04:00:54 np0005548915 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 04:00:54 np0005548915 systemd[4299]: Reached target Paths.
Dec  6 04:00:54 np0005548915 systemd[4299]: Reached target Timers.
Dec  6 04:00:54 np0005548915 systemd[4299]: Starting D-Bus User Message Bus Socket...
Dec  6 04:00:54 np0005548915 systemd[4299]: Starting Create User's Volatile Files and Directories...
Dec  6 04:00:54 np0005548915 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Dec  6 04:00:54 np0005548915 systemd[4299]: Reached target Sockets.
Dec  6 04:00:54 np0005548915 systemd[4299]: Finished Create User's Volatile Files and Directories.
Dec  6 04:00:54 np0005548915 systemd[4299]: Reached target Basic System.
Dec  6 04:00:54 np0005548915 systemd[4299]: Reached target Main User Target.
Dec  6 04:00:54 np0005548915 systemd[4299]: Startup finished in 109ms.
Dec  6 04:00:54 np0005548915 systemd[1]: Started User Manager for UID 1000.
Dec  6 04:00:54 np0005548915 systemd[1]: Started Session 1 of User zuul.
Dec  6 04:00:55 np0005548915 python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:00:57 np0005548915 python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:01:05 np0005548915 python3[4482]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:01:06 np0005548915 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 04:01:06 np0005548915 python3[4524]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  6 04:01:08 np0005548915 python3[4550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU0JPqo3RlcbkISWeWyZyh8N1DipPCXKbgbj83sLrBXd5pRLoLdbqBjiuLvFfP7lb5gET6+eP3VZiOMI6UHmEm8ynKQRTIQ7lxC6wlJ/5bEkQ7shEony5Dt8S+/YriKnW8SR/bfYJwGVDGiYwX9+YLTEkgtaWYCW5aOhF1JYR2fNVZQyTaBuiZFc/j1+ce31wCfSAIAFETx4TP71KVZET/mDhOPfYQSE6dNJCcZnohKVSa1SHNL0bVxbehOrQrmqmiRc81piGO4LAMvuSM3op7QTjc7lDDNoYX/DWm/O6Yd8IV5PAI5jAYm4zViXyj8K/iPfclSAUCutpd/HwsQjjiI9Ei0ObVrpLhV3PWw6UkMmfRl4sN90Bhg/95I6taoeEDSSNojukndyGr3lxM1SkEHO0ZamuvQmAOsP05x89hsZFP9E+RntviBPqrCNyyiE7JEy2H1WfIK5i0KA/BC8M+osytKOc1zBu/jI4TYPr32yUNd7mIBDzpNaUok32L4Pk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:09 np0005548915 python3[4574]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:09 np0005548915 python3[4673]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:01:09 np0005548915 python3[4744]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765011669.3883896-251-259234522709630/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=66d341c321a043af9793d30ca9726f09_id_rsa follow=False checksum=1c48fa8bdbec038bf9f0f4b497dca115d790ad66 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:10 np0005548915 python3[4867]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:01:10 np0005548915 python3[4938]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765011670.2597892-306-204397967249075/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=66d341c321a043af9793d30ca9726f09_id_rsa.pub follow=False checksum=e7cbe2647d02b25f8aa52dd3d3a0ea1aa1cad833 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:12 np0005548915 python3[4986]: ansible-ping Invoked with data=pong
Dec  6 04:01:13 np0005548915 python3[5010]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:01:15 np0005548915 python3[5068]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  6 04:01:16 np0005548915 python3[5100]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:16 np0005548915 python3[5124]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:17 np0005548915 python3[5148]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:17 np0005548915 python3[5172]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:17 np0005548915 python3[5196]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:18 np0005548915 python3[5220]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:19 np0005548915 python3[5246]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:20 np0005548915 python3[5324]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:01:20 np0005548915 python3[5397]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765011680.0325034-31-121883593708449/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:21 np0005548915 python3[5445]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:21 np0005548915 python3[5469]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:22 np0005548915 python3[5493]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:22 np0005548915 python3[5517]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:22 np0005548915 python3[5541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:22 np0005548915 python3[5565]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:23 np0005548915 python3[5589]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:23 np0005548915 python3[5613]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:23 np0005548915 python3[5637]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:23 np0005548915 python3[5661]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:24 np0005548915 python3[5685]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:24 np0005548915 python3[5709]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:24 np0005548915 python3[5733]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:25 np0005548915 python3[5757]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:25 np0005548915 python3[5781]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:25 np0005548915 python3[5805]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:25 np0005548915 python3[5829]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:26 np0005548915 python3[5853]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:26 np0005548915 python3[5877]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:26 np0005548915 python3[5901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:26 np0005548915 python3[5925]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:27 np0005548915 python3[5949]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:27 np0005548915 python3[5973]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:27 np0005548915 python3[5997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:27 np0005548915 python3[6021]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:28 np0005548915 python3[6045]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:01:31 np0005548915 python3[6071]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  6 04:01:31 np0005548915 systemd[1]: Starting Time & Date Service...
Dec  6 04:01:31 np0005548915 systemd[1]: Started Time & Date Service.
Dec  6 04:01:31 np0005548915 systemd-timedated[6073]: Changed time zone to 'UTC' (UTC).
Dec  6 04:01:32 np0005548915 python3[6102]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:32 np0005548915 python3[6178]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:01:32 np0005548915 python3[6249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765011692.3971589-251-23918897575888/source _original_basename=tmphdeuo2bx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:33 np0005548915 python3[6349]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:01:33 np0005548915 python3[6420]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765011693.3040066-301-74495999743567/source _original_basename=tmpa9hbw3x4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:34 np0005548915 python3[6522]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:01:35 np0005548915 python3[6595]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765011694.6163456-381-82365141350240/source _original_basename=tmpkt__clge follow=False checksum=e37e58be433a53918a64d1ef12dfc1e7d01516d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:35 np0005548915 python3[6643]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:01:36 np0005548915 python3[6669]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:01:36 np0005548915 python3[6749]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:01:36 np0005548915 python3[6822]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765011696.3556497-451-150350129668605/source _original_basename=tmpvcls7vjg follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:37 np0005548915 python3[6873]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-c2c1-5ee8-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:01:38 np0005548915 python3[6901]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-c2c1-5ee8-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  6 04:01:39 np0005548915 python3[6929]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:01:58 np0005548915 python3[6955]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:02:01 np0005548915 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  6 04:02:41 np0005548915 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  6 04:02:41 np0005548915 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1791] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  6 04:02:41 np0005548915 systemd-udevd[6959]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1928] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1953] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1958] device (eth1): carrier: link connected
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1960] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1966] policy: auto-activating connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7)
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1971] device (eth1): Activation: starting connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7)
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1972] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1976] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1981] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:02:41 np0005548915 NetworkManager[855]: <info>  [1765011761.1986] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:02:42 np0005548915 python3[6985]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-5a9f-9569-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:02:52 np0005548915 python3[7065]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:02:52 np0005548915 python3[7138]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765011771.7652702-104-183839944671597/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=009003d5d114e1477e06615c5dca6e1028e76f02 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:02:53 np0005548915 python3[7188]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:02:53 np0005548915 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  6 04:02:53 np0005548915 systemd[1]: Stopped Network Manager Wait Online.
Dec  6 04:02:53 np0005548915 systemd[1]: Stopping Network Manager Wait Online...
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2541] caught SIGTERM, shutting down normally.
Dec  6 04:02:53 np0005548915 systemd[1]: Stopping Network Manager...
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2550] dhcp4 (eth0): canceled DHCP transaction
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2550] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2550] dhcp4 (eth0): state changed no lease
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2554] manager: NetworkManager state is now CONNECTING
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2705] dhcp4 (eth1): canceled DHCP transaction
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2705] dhcp4 (eth1): state changed no lease
Dec  6 04:02:53 np0005548915 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 04:02:53 np0005548915 NetworkManager[855]: <info>  [1765011773.2762] exiting (success)
Dec  6 04:02:53 np0005548915 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 04:02:53 np0005548915 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  6 04:02:53 np0005548915 systemd[1]: Stopped Network Manager.
Dec  6 04:02:53 np0005548915 systemd[1]: Starting Network Manager...
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.3419] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:eb1a7567-b576-49d7-a613-e357bf119324)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.3422] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.3474] manager[0x555cfe7a2070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  6 04:02:53 np0005548915 systemd[1]: Starting Hostname Service...
Dec  6 04:02:53 np0005548915 systemd[1]: Started Hostname Service.
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4245] hostname: hostname: using hostnamed
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4246] hostname: static hostname changed from (none) to "np0005548915.novalocal"
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4252] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4259] manager[0x555cfe7a2070]: rfkill: Wi-Fi hardware radio set enabled
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4259] manager[0x555cfe7a2070]: rfkill: WWAN hardware radio set enabled
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4291] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4291] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4292] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4293] manager: Networking is enabled by state file
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4296] settings: Loaded settings plugin: keyfile (internal)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4300] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4325] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4333] dhcp: init: Using DHCP client 'internal'
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4336] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4340] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4345] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4352] device (lo): Activation: starting connection 'lo' (40483b14-1904-462e-975f-deec93e74606)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4358] device (eth0): carrier: link connected
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4362] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4366] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4366] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4371] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4377] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4382] device (eth1): carrier: link connected
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4386] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4390] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7) (indicated)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4390] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4395] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4400] device (eth1): Activation: starting connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4407] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  6 04:02:53 np0005548915 systemd[1]: Started Network Manager.
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4411] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4412] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4414] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4416] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4418] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4420] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4422] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4424] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4429] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4431] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4438] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4440] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4463] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4468] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4529] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4537] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4538] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4544] device (lo): Activation: successful, device activated.
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4565] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4567] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4570] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4574] device (eth0): Activation: successful, device activated.
Dec  6 04:02:53 np0005548915 NetworkManager[7201]: <info>  [1765011773.4579] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  6 04:02:53 np0005548915 systemd[1]: Starting Network Manager Wait Online...
Dec  6 04:02:53 np0005548915 python3[7272]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-5a9f-9569-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:03:03 np0005548915 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 04:03:23 np0005548915 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.4583] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 04:03:38 np0005548915 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 04:03:38 np0005548915 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5026] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5031] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5043] device (eth1): Activation: successful, device activated.
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5055] manager: startup complete
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5059] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <warn>  [1765011818.5070] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5082] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  6 04:03:38 np0005548915 systemd[1]: Finished Network Manager Wait Online.
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5185] dhcp4 (eth1): canceled DHCP transaction
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5187] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5189] dhcp4 (eth1): state changed no lease
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5217] policy: auto-activating connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5225] device (eth1): Activation: starting connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5228] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5233] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5245] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5262] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5308] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5312] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:03:38 np0005548915 NetworkManager[7201]: <info>  [1765011818.5323] device (eth1): Activation: successful, device activated.
Dec  6 04:03:48 np0005548915 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 04:03:51 np0005548915 systemd[4299]: Starting Mark boot as successful...
Dec  6 04:03:51 np0005548915 systemd[4299]: Finished Mark boot as successful.
Dec  6 04:03:53 np0005548915 systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Dec  6 04:04:51 np0005548915 systemd-logind[795]: New session 3 of user zuul.
Dec  6 04:04:51 np0005548915 systemd[1]: Started Session 3 of User zuul.
Dec  6 04:04:51 np0005548915 python3[7382]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:04:51 np0005548915 python3[7455]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765011891.135714-373-108233033013812/source _original_basename=tmptpftk2ug follow=False checksum=81d87914000d1f03e4ba3a0a6e4eda468c65f433 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:04:55 np0005548915 systemd[1]: session-3.scope: Deactivated successfully.
Dec  6 04:04:55 np0005548915 systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Dec  6 04:04:55 np0005548915 systemd-logind[795]: Removed session 3.
Dec  6 04:06:51 np0005548915 systemd[4299]: Created slice User Background Tasks Slice.
Dec  6 04:06:51 np0005548915 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Dec  6 04:06:51 np0005548915 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Dec  6 04:10:24 np0005548915 systemd-logind[795]: New session 4 of user zuul.
Dec  6 04:10:24 np0005548915 systemd[1]: Started Session 4 of User zuul.
Dec  6 04:10:24 np0005548915 python3[7515]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-6aeb-b52e-000000001cd4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:10:25 np0005548915 python3[7544]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:10:25 np0005548915 python3[7570]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:10:25 np0005548915 python3[7596]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:10:25 np0005548915 python3[7622]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:10:26 np0005548915 python3[7648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:10:27 np0005548915 python3[7726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:10:27 np0005548915 python3[7799]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765012227.1483302-516-255246545221629/source _original_basename=tmp5rewaca3 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:10:28 np0005548915 python3[7849]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:10:29 np0005548915 systemd[1]: Reloading.
Dec  6 04:10:29 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:10:30 np0005548915 python3[7905]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  6 04:10:31 np0005548915 python3[7931]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:10:31 np0005548915 python3[7959]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:10:32 np0005548915 python3[7987]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:10:32 np0005548915 python3[8015]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:10:33 np0005548915 python3[8042]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-6aeb-b52e-000000001cdb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:10:33 np0005548915 python3[8072]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 04:10:36 np0005548915 systemd[1]: session-4.scope: Deactivated successfully.
Dec  6 04:10:36 np0005548915 systemd[1]: session-4.scope: Consumed 3.749s CPU time.
Dec  6 04:10:36 np0005548915 systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Dec  6 04:10:36 np0005548915 systemd-logind[795]: Removed session 4.
Dec  6 04:10:38 np0005548915 systemd-logind[795]: New session 5 of user zuul.
Dec  6 04:10:38 np0005548915 systemd[1]: Started Session 5 of User zuul.
Dec  6 04:10:38 np0005548915 python3[8106]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  6 04:10:54 np0005548915 kernel: SELinux:  Converting 386 SID table entries...
Dec  6 04:10:54 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:10:54 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:10:54 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:10:54 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:10:54 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:10:54 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:10:54 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:11:03 np0005548915 kernel: SELinux:  Converting 386 SID table entries...
Dec  6 04:11:03 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:11:03 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:11:03 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:11:03 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:11:03 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:11:03 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:11:03 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:11:12 np0005548915 kernel: SELinux:  Converting 386 SID table entries...
Dec  6 04:11:12 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:11:12 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:11:12 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:11:12 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:11:12 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:11:12 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:11:12 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:11:13 np0005548915 setsebool[8172]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  6 04:11:13 np0005548915 setsebool[8172]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  6 04:11:24 np0005548915 kernel: SELinux:  Converting 389 SID table entries...
Dec  6 04:11:24 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:11:24 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:11:24 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:11:24 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:11:24 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:11:24 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:11:24 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:11:41 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  6 04:11:41 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:11:41 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:11:41 np0005548915 systemd[1]: Reloading.
Dec  6 04:11:42 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:11:42 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:11:45 np0005548915 irqbalance[788]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  6 04:11:45 np0005548915 irqbalance[788]: IRQ 27 affinity is now unmanaged
Dec  6 04:11:47 np0005548915 python3[13464]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-d561-0a5b-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:11:48 np0005548915 kernel: evm: overlay not supported
Dec  6 04:11:48 np0005548915 systemd[4299]: Starting D-Bus User Message Bus...
Dec  6 04:11:48 np0005548915 dbus-broker-launch[14004]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  6 04:11:48 np0005548915 dbus-broker-launch[14004]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  6 04:11:48 np0005548915 systemd[4299]: Started D-Bus User Message Bus.
Dec  6 04:11:48 np0005548915 dbus-broker-lau[14004]: Ready
Dec  6 04:11:48 np0005548915 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  6 04:11:48 np0005548915 systemd[4299]: Created slice Slice /user.
Dec  6 04:11:48 np0005548915 systemd[4299]: podman-13950.scope: unit configures an IP firewall, but not running as root.
Dec  6 04:11:48 np0005548915 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Dec  6 04:11:48 np0005548915 systemd[4299]: Started podman-13950.scope.
Dec  6 04:11:48 np0005548915 systemd[4299]: Started podman-pause-012e1fba.scope.
Dec  6 04:11:49 np0005548915 systemd[1]: session-5.scope: Deactivated successfully.
Dec  6 04:11:49 np0005548915 systemd[1]: session-5.scope: Consumed 58.509s CPU time.
Dec  6 04:11:49 np0005548915 systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Dec  6 04:11:49 np0005548915 systemd-logind[795]: Removed session 5.
Dec  6 04:12:18 np0005548915 systemd-logind[795]: New session 6 of user zuul.
Dec  6 04:12:18 np0005548915 systemd[1]: Started Session 6 of User zuul.
Dec  6 04:12:18 np0005548915 python3[27126]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/b/hDus+zgErbxpiAu4axJ55LMjNixMhoE4DoEU6Wq/xn30MdVWwMPMhgQamY6n3JqihnzwOz1OzKhBTCdzls= zuul@np0005548914.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:12:19 np0005548915 python3[27363]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/b/hDus+zgErbxpiAu4axJ55LMjNixMhoE4DoEU6Wq/xn30MdVWwMPMhgQamY6n3JqihnzwOz1OzKhBTCdzls= zuul@np0005548914.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:12:20 np0005548915 python3[27813]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548915.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  6 04:12:20 np0005548915 python3[28062]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/b/hDus+zgErbxpiAu4axJ55LMjNixMhoE4DoEU6Wq/xn30MdVWwMPMhgQamY6n3JqihnzwOz1OzKhBTCdzls= zuul@np0005548914.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 04:12:21 np0005548915 python3[28392]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:12:21 np0005548915 python3[28713]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765012340.8112967-151-75733541540141/source _original_basename=tmpx9ak07cc follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:12:22 np0005548915 python3[29061]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  6 04:12:22 np0005548915 systemd[1]: Starting Hostname Service...
Dec  6 04:12:22 np0005548915 systemd[1]: Started Hostname Service.
Dec  6 04:12:22 np0005548915 systemd-hostnamed[29068]: Changed pretty hostname to 'compute-0'
Dec  6 04:12:22 np0005548915 systemd-hostnamed[29068]: Hostname set to <compute-0> (static)
Dec  6 04:12:22 np0005548915 NetworkManager[7201]: <info>  [1765012342.8039] hostname: static hostname changed from "np0005548915.novalocal" to "compute-0"
Dec  6 04:12:22 np0005548915 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 04:12:22 np0005548915 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 04:12:23 np0005548915 systemd[1]: session-6.scope: Deactivated successfully.
Dec  6 04:12:23 np0005548915 systemd[1]: session-6.scope: Consumed 2.143s CPU time.
Dec  6 04:12:23 np0005548915 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Dec  6 04:12:23 np0005548915 systemd-logind[795]: Removed session 6.
Dec  6 04:12:25 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:12:25 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:12:25 np0005548915 systemd[1]: man-db-cache-update.service: Consumed 51.312s CPU time.
Dec  6 04:12:25 np0005548915 systemd[1]: run-rb1d08a7c17d54d82a6dd6b5e414b4676.service: Deactivated successfully.
Dec  6 04:12:32 np0005548915 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 04:12:52 np0005548915 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 04:15:51 np0005548915 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  6 04:15:51 np0005548915 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  6 04:15:51 np0005548915 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  6 04:15:51 np0005548915 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  6 04:16:07 np0005548915 systemd-logind[795]: New session 7 of user zuul.
Dec  6 04:16:07 np0005548915 systemd[1]: Started Session 7 of User zuul.
Dec  6 04:16:07 np0005548915 python3[29979]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:16:10 np0005548915 python3[30095]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:16:10 np0005548915 python3[30168]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:16:11 np0005548915 python3[30194]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:16:11 np0005548915 python3[30267]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:16:11 np0005548915 python3[30293]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:16:12 np0005548915 python3[30366]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:16:12 np0005548915 python3[30392]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:16:12 np0005548915 python3[30465]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:16:12 np0005548915 python3[30491]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:16:13 np0005548915 python3[30566]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:16:13 np0005548915 python3[30592]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:16:13 np0005548915 python3[30665]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:16:13 np0005548915 python3[30691]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:16:14 np0005548915 python3[30764]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:16:26 np0005548915 python3[30822]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:21:26 np0005548915 systemd[1]: session-7.scope: Deactivated successfully.
Dec  6 04:21:26 np0005548915 systemd[1]: session-7.scope: Consumed 4.753s CPU time.
Dec  6 04:21:26 np0005548915 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Dec  6 04:21:26 np0005548915 systemd-logind[795]: Removed session 7.
Dec  6 04:27:56 np0005548915 systemd-logind[795]: New session 8 of user zuul.
Dec  6 04:27:56 np0005548915 systemd[1]: Started Session 8 of User zuul.
Dec  6 04:27:57 np0005548915 python3.9[30987]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:27:58 np0005548915 python3.9[31168]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:28:06 np0005548915 systemd[1]: session-8.scope: Deactivated successfully.
Dec  6 04:28:06 np0005548915 systemd[1]: session-8.scope: Consumed 7.283s CPU time.
Dec  6 04:28:06 np0005548915 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Dec  6 04:28:06 np0005548915 systemd-logind[795]: Removed session 8.
Dec  6 04:28:22 np0005548915 systemd-logind[795]: New session 9 of user zuul.
Dec  6 04:28:22 np0005548915 systemd[1]: Started Session 9 of User zuul.
Dec  6 04:28:22 np0005548915 python3.9[31378]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  6 04:28:24 np0005548915 python3.9[31552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:28:25 np0005548915 python3.9[31704]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:28:26 np0005548915 python3.9[31857]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:28:27 np0005548915 python3.9[32009]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:28:27 np0005548915 python3.9[32161]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:28:28 np0005548915 python3.9[32284]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013307.3247175-177-225592901403469/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:28:29 np0005548915 python3.9[32436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:28:30 np0005548915 python3.9[32592]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:28:31 np0005548915 python3.9[32744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:28:32 np0005548915 python3.9[32894]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:28:39 np0005548915 python3.9[33147]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:28:40 np0005548915 python3.9[33297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:28:41 np0005548915 python3.9[33451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:28:42 np0005548915 python3.9[33609]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:28:43 np0005548915 python3.9[33693]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:29:24 np0005548915 systemd[1]: Reloading.
Dec  6 04:29:24 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:29:25 np0005548915 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  6 04:29:25 np0005548915 systemd[1]: Reloading.
Dec  6 04:29:25 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:29:25 np0005548915 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  6 04:29:25 np0005548915 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  6 04:29:25 np0005548915 systemd[1]: Reloading.
Dec  6 04:29:25 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:29:26 np0005548915 systemd[1]: Starting dnf makecache...
Dec  6 04:29:26 np0005548915 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  6 04:29:26 np0005548915 dnf[33983]: Failed determining last makecache time.
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-barbican-42b4c41831408a8e323 129 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  6 04:29:26 np0005548915 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  6 04:29:26 np0005548915 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 128 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-cinder-1c00d6490d88e436f26ef 147 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-python-stevedore-c4acc5639fd2329372142 177 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-python-cloudkitty-tests-tempest-2c80f8 140 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 150 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 133 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-python-designate-tests-tempest-347fdbc 150 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-glance-1fd12c29b339f30fe823e 151 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 146 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-manila-3c01b7181572c95dac462 157 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-python-whitebox-neutron-tests-tempest- 156 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-octavia-ba397f07a7331190208c 159 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-watcher-c014f81a8647287f6dcc 164 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-ansible-config_template-5ccaa22121a7ff 157 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 156 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-swift-dc98a8463506ac520c469a 148 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-python-tempestconf-8515371b7cceebd4282 134 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: delorean-openstack-heat-ui-013accbfd179753bc3f0 132 kB/s | 3.0 kB     00:00
Dec  6 04:29:26 np0005548915 dnf[33983]: CentOS Stream 9 - BaseOS                         77 kB/s | 7.3 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: CentOS Stream 9 - AppStream                      85 kB/s | 7.4 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: CentOS Stream 9 - CRB                            30 kB/s | 7.2 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: CentOS Stream 9 - Extras packages                74 kB/s | 8.3 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: dlrn-antelope-testing                            92 kB/s | 3.0 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: dlrn-antelope-build-deps                         94 kB/s | 3.0 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: centos9-rabbitmq                                 84 kB/s | 3.0 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: centos9-storage                                 106 kB/s | 3.0 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: centos9-opstools                                129 kB/s | 3.0 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: NFV SIG OpenvSwitch                             143 kB/s | 3.0 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: repo-setup-centos-appstream                     160 kB/s | 4.4 kB     00:00
Dec  6 04:29:27 np0005548915 dnf[33983]: repo-setup-centos-baseos                        168 kB/s | 3.9 kB     00:00
Dec  6 04:29:28 np0005548915 dnf[33983]: repo-setup-centos-highavailability              177 kB/s | 3.9 kB     00:00
Dec  6 04:29:28 np0005548915 dnf[33983]: repo-setup-centos-powertools                     56 kB/s | 4.3 kB     00:00
Dec  6 04:29:28 np0005548915 dnf[33983]: Extra Packages for Enterprise Linux 9 - x86_64  159 kB/s |  32 kB     00:00
Dec  6 04:29:28 np0005548915 dnf[33983]: Metadata cache created.
Dec  6 04:29:28 np0005548915 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  6 04:29:28 np0005548915 systemd[1]: Finished dnf makecache.
Dec  6 04:29:28 np0005548915 systemd[1]: dnf-makecache.service: Consumed 1.844s CPU time.
Dec  6 04:30:31 np0005548915 kernel: SELinux:  Converting 2718 SID table entries...
Dec  6 04:30:31 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:30:31 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:30:31 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:30:31 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:30:31 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:30:31 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:30:31 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:30:31 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  6 04:30:32 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:30:32 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:30:32 np0005548915 systemd[1]: Reloading.
Dec  6 04:30:32 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:30:32 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:30:33 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:30:33 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:30:33 np0005548915 systemd[1]: man-db-cache-update.service: Consumed 1.327s CPU time.
Dec  6 04:30:33 np0005548915 systemd[1]: run-r3c3800aad8f24a4f90c8931f8ecee67f.service: Deactivated successfully.
Dec  6 04:30:45 np0005548915 python3.9[35264]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:30:48 np0005548915 python3.9[35545]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  6 04:30:49 np0005548915 python3.9[35697]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  6 04:30:54 np0005548915 python3.9[35850]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:30:58 np0005548915 python3.9[36002]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  6 04:31:00 np0005548915 python3.9[36155]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:31:05 np0005548915 irqbalance[788]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  6 04:31:05 np0005548915 irqbalance[788]: IRQ 26 affinity is now unmanaged
Dec  6 04:31:08 np0005548915 python3.9[36307]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:31:09 np0005548915 python3.9[36430]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013464.6928537-666-246355898964676/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:31:11 np0005548915 python3.9[36582]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:31:11 np0005548915 python3.9[36734]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:31:12 np0005548915 python3.9[36887]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:31:14 np0005548915 python3.9[37039]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  6 04:31:15 np0005548915 python3.9[37192]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 04:31:15 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:31:16 np0005548915 python3.9[37351]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 04:31:17 np0005548915 python3.9[37511]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  6 04:31:18 np0005548915 python3.9[37664]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 04:31:18 np0005548915 python3.9[37822]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  6 04:31:20 np0005548915 python3.9[37974]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:31:23 np0005548915 python3.9[38127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:31:24 np0005548915 python3.9[38279]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:31:24 np0005548915 python3.9[38402]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013483.5994058-1023-96510026555493/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:31:25 np0005548915 python3.9[38554]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:31:26 np0005548915 systemd[1]: Starting Load Kernel Modules...
Dec  6 04:31:26 np0005548915 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  6 04:31:26 np0005548915 kernel: Bridge firewalling registered
Dec  6 04:31:26 np0005548915 systemd-modules-load[38558]: Inserted module 'br_netfilter'
Dec  6 04:31:26 np0005548915 systemd[1]: Finished Load Kernel Modules.
Dec  6 04:31:26 np0005548915 python3.9[38713]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:31:27 np0005548915 python3.9[38836]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013486.398648-1092-31399700733310/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:31:29 np0005548915 python3.9[38988]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:31:32 np0005548915 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  6 04:31:32 np0005548915 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  6 04:31:32 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:31:33 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:31:33 np0005548915 systemd[1]: Reloading.
Dec  6 04:31:33 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:31:33 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:31:37 np0005548915 python3.9[42349]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:31:37 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:31:37 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:31:37 np0005548915 systemd[1]: man-db-cache-update.service: Consumed 6.071s CPU time.
Dec  6 04:31:37 np0005548915 systemd[1]: run-r845a5114ca674be5a2bfda5f0e15afc2.service: Deactivated successfully.
Dec  6 04:31:38 np0005548915 python3.9[42854]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  6 04:31:38 np0005548915 python3.9[43004]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:31:40 np0005548915 python3.9[43156]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:31:40 np0005548915 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  6 04:31:40 np0005548915 systemd[1]: Starting Authorization Manager...
Dec  6 04:31:40 np0005548915 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  6 04:31:40 np0005548915 polkitd[43373]: Started polkitd version 0.117
Dec  6 04:31:40 np0005548915 systemd[1]: Started Authorization Manager.
Dec  6 04:31:41 np0005548915 python3.9[43543]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:31:41 np0005548915 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  6 04:31:42 np0005548915 systemd[1]: tuned.service: Deactivated successfully.
Dec  6 04:31:42 np0005548915 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  6 04:31:42 np0005548915 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  6 04:31:42 np0005548915 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  6 04:31:43 np0005548915 python3.9[43705]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  6 04:31:46 np0005548915 python3.9[43857]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:31:46 np0005548915 systemd[1]: Reloading.
Dec  6 04:31:46 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:31:47 np0005548915 python3.9[44046]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:31:47 np0005548915 systemd[1]: Reloading.
Dec  6 04:31:48 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:31:49 np0005548915 python3.9[44235]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:31:50 np0005548915 python3.9[44388]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:31:50 np0005548915 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  6 04:31:50 np0005548915 python3.9[44541]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:31:53 np0005548915 python3.9[44703]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:31:54 np0005548915 python3.9[44856]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:31:54 np0005548915 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  6 04:31:54 np0005548915 systemd[1]: Stopped Apply Kernel Variables.
Dec  6 04:31:54 np0005548915 systemd[1]: Stopping Apply Kernel Variables...
Dec  6 04:31:54 np0005548915 systemd[1]: Starting Apply Kernel Variables...
Dec  6 04:31:54 np0005548915 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  6 04:31:54 np0005548915 systemd[1]: Finished Apply Kernel Variables.
Dec  6 04:31:54 np0005548915 systemd[1]: session-9.scope: Deactivated successfully.
Dec  6 04:31:54 np0005548915 systemd[1]: session-9.scope: Consumed 2min 22.195s CPU time.
Dec  6 04:31:54 np0005548915 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Dec  6 04:31:54 np0005548915 systemd-logind[795]: Removed session 9.
Dec  6 04:32:00 np0005548915 systemd-logind[795]: New session 10 of user zuul.
Dec  6 04:32:00 np0005548915 systemd[1]: Started Session 10 of User zuul.
Dec  6 04:32:01 np0005548915 python3.9[45039]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:32:02 np0005548915 python3.9[45195]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  6 04:32:03 np0005548915 python3.9[45348]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 04:32:05 np0005548915 python3.9[45506]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 04:32:06 np0005548915 python3.9[45666]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:32:07 np0005548915 python3.9[45750]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 04:32:14 np0005548915 python3.9[45913]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:32:24 np0005548915 kernel: SELinux:  Converting 2730 SID table entries...
Dec  6 04:32:24 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:32:24 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:32:24 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:32:24 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:32:24 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:32:24 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:32:24 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:32:25 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  6 04:32:25 np0005548915 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  6 04:32:26 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:32:26 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:32:26 np0005548915 systemd[1]: Reloading.
Dec  6 04:32:26 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:32:26 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:32:27 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:32:27 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:32:27 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:32:27 np0005548915 systemd[1]: man-db-cache-update.service: Consumed 1.009s CPU time.
Dec  6 04:32:27 np0005548915 systemd[1]: run-rf42192781ac443f9a0ad5a9955f0232d.service: Deactivated successfully.
Dec  6 04:32:32 np0005548915 python3.9[47012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:32:34 np0005548915 systemd[1]: Reloading.
Dec  6 04:32:34 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:32:34 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:32:34 np0005548915 systemd[1]: Starting Open vSwitch Database Unit...
Dec  6 04:32:34 np0005548915 chown[47053]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  6 04:32:34 np0005548915 ovs-ctl[47058]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  6 04:32:34 np0005548915 ovs-ctl[47058]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  6 04:32:34 np0005548915 ovs-ctl[47058]: Starting ovsdb-server [  OK  ]
Dec  6 04:32:34 np0005548915 ovs-vsctl[47107]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  6 04:32:34 np0005548915 ovs-vsctl[47123]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d39b5be8-d4cf-41c7-9a64-1ee03801f4e1\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  6 04:32:34 np0005548915 ovs-ctl[47058]: Configuring Open vSwitch system IDs [  OK  ]
Dec  6 04:32:34 np0005548915 ovs-vsctl[47132]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  6 04:32:34 np0005548915 ovs-ctl[47058]: Enabling remote OVSDB managers [  OK  ]
Dec  6 04:32:34 np0005548915 systemd[1]: Started Open vSwitch Database Unit.
Dec  6 04:32:34 np0005548915 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  6 04:32:34 np0005548915 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  6 04:32:34 np0005548915 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  6 04:32:34 np0005548915 kernel: openvswitch: Open vSwitch switching datapath
Dec  6 04:32:34 np0005548915 ovs-ctl[47178]: Inserting openvswitch module [  OK  ]
Dec  6 04:32:34 np0005548915 ovs-ctl[47147]: Starting ovs-vswitchd [  OK  ]
Dec  6 04:32:35 np0005548915 ovs-ctl[47147]: Enabling remote OVSDB managers [  OK  ]
Dec  6 04:32:35 np0005548915 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  6 04:32:35 np0005548915 ovs-vsctl[47196]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  6 04:32:35 np0005548915 systemd[1]: Starting Open vSwitch...
Dec  6 04:32:35 np0005548915 systemd[1]: Finished Open vSwitch.
Dec  6 04:32:36 np0005548915 python3.9[47347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:32:37 np0005548915 python3.9[47499]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  6 04:32:38 np0005548915 kernel: SELinux:  Converting 2744 SID table entries...
Dec  6 04:32:38 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:32:38 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:32:38 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:32:38 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:32:38 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:32:38 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:32:38 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:32:39 np0005548915 python3.9[47654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:32:40 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  6 04:32:40 np0005548915 python3.9[47812]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:32:43 np0005548915 python3.9[47965]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:32:45 np0005548915 python3.9[48252]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 04:32:46 np0005548915 python3.9[48402]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:32:47 np0005548915 python3.9[48556]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:32:49 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:32:49 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:32:49 np0005548915 systemd[1]: Reloading.
Dec  6 04:32:49 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:32:49 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:32:49 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:32:49 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:32:49 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:32:49 np0005548915 systemd[1]: run-re26409a8f10a4530981ee1472405070e.service: Deactivated successfully.
Dec  6 04:32:51 np0005548915 python3.9[48873]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:32:51 np0005548915 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  6 04:32:51 np0005548915 systemd[1]: Stopped Network Manager Wait Online.
Dec  6 04:32:51 np0005548915 systemd[1]: Stopping Network Manager Wait Online...
Dec  6 04:32:51 np0005548915 systemd[1]: Stopping Network Manager...
Dec  6 04:32:51 np0005548915 NetworkManager[7201]: <info>  [1765013571.5747] caught SIGTERM, shutting down normally.
Dec  6 04:32:51 np0005548915 NetworkManager[7201]: <info>  [1765013571.5770] dhcp4 (eth0): canceled DHCP transaction
Dec  6 04:32:51 np0005548915 NetworkManager[7201]: <info>  [1765013571.5771] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:32:51 np0005548915 NetworkManager[7201]: <info>  [1765013571.5771] dhcp4 (eth0): state changed no lease
Dec  6 04:32:51 np0005548915 NetworkManager[7201]: <info>  [1765013571.5774] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 04:32:51 np0005548915 NetworkManager[7201]: <info>  [1765013571.5859] exiting (success)
Dec  6 04:32:51 np0005548915 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 04:32:51 np0005548915 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  6 04:32:51 np0005548915 systemd[1]: Stopped Network Manager.
Dec  6 04:32:51 np0005548915 systemd[1]: NetworkManager.service: Consumed 10.244s CPU time, 4.1M memory peak, read 0B from disk, written 34.0K to disk.
Dec  6 04:32:51 np0005548915 systemd[1]: Starting Network Manager...
Dec  6 04:32:51 np0005548915 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.6489] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:eb1a7567-b576-49d7-a613-e357bf119324)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.6492] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.6569] manager[0x55f1b750e090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  6 04:32:51 np0005548915 systemd[1]: Starting Hostname Service...
Dec  6 04:32:51 np0005548915 systemd[1]: Started Hostname Service.
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7556] hostname: hostname: using hostnamed
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7560] hostname: static hostname changed from (none) to "compute-0"
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7567] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7572] manager[0x55f1b750e090]: rfkill: Wi-Fi hardware radio set enabled
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7572] manager[0x55f1b750e090]: rfkill: WWAN hardware radio set enabled
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7598] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7608] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7609] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7610] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7610] manager: Networking is enabled by state file
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7613] settings: Loaded settings plugin: keyfile (internal)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7617] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7650] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7663] dhcp: init: Using DHCP client 'internal'
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7666] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7672] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7678] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7689] device (lo): Activation: starting connection 'lo' (40483b14-1904-462e-975f-deec93e74606)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7698] device (eth0): carrier: link connected
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7704] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7711] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7712] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7720] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7728] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7737] device (eth1): carrier: link connected
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7742] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7749] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73) (indicated)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7750] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7756] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7766] device (eth1): Activation: starting connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec  6 04:32:51 np0005548915 systemd[1]: Started Network Manager.
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7779] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7815] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7825] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7831] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7836] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7844] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7850] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7857] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7866] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7893] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7898] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7909] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7924] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 systemd[1]: Starting Network Manager Wait Online...
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7957] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.7971] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8046] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8048] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8053] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8062] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8068] device (lo): Activation: successful, device activated.
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8076] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8079] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8082] device (eth1): Activation: successful, device activated.
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8118] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8119] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8122] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8125] device (eth0): Activation: successful, device activated.
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8131] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  6 04:32:51 np0005548915 NetworkManager[48882]: <info>  [1765013571.8134] manager: startup complete
Dec  6 04:32:51 np0005548915 systemd[1]: Finished Network Manager Wait Online.
Dec  6 04:32:52 np0005548915 python3.9[49099]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:32:57 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:32:57 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:32:57 np0005548915 systemd[1]: Reloading.
Dec  6 04:32:57 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:32:57 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:32:57 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:32:58 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:32:58 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:32:58 np0005548915 systemd[1]: run-r16c98f1ff1654240bf89332ff9e67ae7.service: Deactivated successfully.
Dec  6 04:33:01 np0005548915 python3.9[49557]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:33:01 np0005548915 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 04:33:02 np0005548915 python3.9[49709]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:03 np0005548915 python3.9[49863]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:04 np0005548915 python3.9[50015]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:04 np0005548915 python3.9[50167]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:05 np0005548915 python3.9[50319]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:06 np0005548915 python3.9[50471]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:33:07 np0005548915 python3.9[50594]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013585.9093392-647-226015058441756/.source _original_basename=.vs_ipouj follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:08 np0005548915 python3.9[50746]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:08 np0005548915 python3.9[50898]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  6 04:33:09 np0005548915 python3.9[51050]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:12 np0005548915 python3.9[51477]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  6 04:33:13 np0005548915 ansible-async_wrapper.py[51652]: Invoked with j224323356287 300 /home/zuul/.ansible/tmp/ansible-tmp-1765013592.6511362-845-179158062912121/AnsiballZ_edpm_os_net_config.py _
Dec  6 04:33:13 np0005548915 ansible-async_wrapper.py[51655]: Starting module and watcher
Dec  6 04:33:13 np0005548915 ansible-async_wrapper.py[51655]: Start watching 51656 (300)
Dec  6 04:33:13 np0005548915 ansible-async_wrapper.py[51656]: Start module (51656)
Dec  6 04:33:13 np0005548915 ansible-async_wrapper.py[51652]: Return async_wrapper task started.
Dec  6 04:33:13 np0005548915 python3.9[51657]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  6 04:33:14 np0005548915 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  6 04:33:14 np0005548915 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  6 04:33:14 np0005548915 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  6 04:33:14 np0005548915 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  6 04:33:14 np0005548915 kernel: cfg80211: failed to load regulatory.db
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.1358] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.1394] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2233] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2237] audit: op="connection-add" uuid="d06b796e-eff3-47e6-9580-60f48bdc3b4a" name="br-ex-br" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2259] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2261] audit: op="connection-add" uuid="f3ea87c2-8306-4b8f-9729-5c82dc71ef5e" name="br-ex-port" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2279] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2281] audit: op="connection-add" uuid="11b7839f-0ef4-4c44-998e-24bd4d572348" name="eth1-port" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2298] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2300] audit: op="connection-add" uuid="fefc03af-e59b-4845-a72f-9adee1229bca" name="vlan20-port" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2317] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2319] audit: op="connection-add" uuid="04ade02e-3073-47ab-a59b-268630689f01" name="vlan21-port" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2335] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2338] audit: op="connection-add" uuid="f7cf4ddb-e9d9-4376-a71b-b818cd6520cf" name="vlan22-port" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2353] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2356] audit: op="connection-add" uuid="9e3fc56d-219f-411a-be95-7518d29c56f3" name="vlan23-port" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2382] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2403] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2406] audit: op="connection-add" uuid="effa1e3c-ea27-4e29-92bf-336c557377b9" name="br-ex-if" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2453] audit: op="connection-update" uuid="6151fa65-6cef-549f-91ba-9f68f8a2cb73" name="ci-private-network" args="ovs-interface.type,ipv6.dns,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.routing-rules,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.routing-rules,connection.port-type,connection.slave-type,connection.controller,connection.master,connection.timestamp,ovs-external-ids.data" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2475] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2477] audit: op="connection-add" uuid="7bc011b9-8b26-4ceb-8792-16af1c51b18b" name="vlan20-if" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2497] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2500] audit: op="connection-add" uuid="f40c8203-a736-44ba-b87c-e14feb441d1e" name="vlan21-if" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2521] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2524] audit: op="connection-add" uuid="b558af2b-19fb-4a8b-a432-c92141b38d13" name="vlan22-if" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2558] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2562] audit: op="connection-add" uuid="eccfa609-5d3a-480b-b546-ce9d96de7c68" name="vlan23-if" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2581] audit: op="connection-delete" uuid="801d2662-229c-3ec2-ab7b-8017b4489ad7" name="Wired connection 1" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2601] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2617] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2623] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d06b796e-eff3-47e6-9580-60f48bdc3b4a)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2625] audit: op="connection-activate" uuid="d06b796e-eff3-47e6-9580-60f48bdc3b4a" name="br-ex-br" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2628] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2640] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2646] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f3ea87c2-8306-4b8f-9729-5c82dc71ef5e)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2649] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2658] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2666] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (11b7839f-0ef4-4c44-998e-24bd4d572348)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2669] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2682] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2692] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (fefc03af-e59b-4845-a72f-9adee1229bca)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2696] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2710] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2717] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (04ade02e-3073-47ab-a59b-268630689f01)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2720] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2731] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2738] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (f7cf4ddb-e9d9-4376-a71b-b818cd6520cf)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2741] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2752] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2759] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9e3fc56d-219f-411a-be95-7518d29c56f3)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2760] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2765] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2767] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2778] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2788] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2795] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (effa1e3c-ea27-4e29-92bf-336c557377b9)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2796] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2802] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2805] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2807] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2809] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2827] device (eth1): disconnecting for new activation request.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2828] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2834] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2837] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2839] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2843] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2851] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2860] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7bc011b9-8b26-4ceb-8792-16af1c51b18b)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2861] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2866] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2870] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2872] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2877] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2885] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2893] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (f40c8203-a736-44ba-b87c-e14feb441d1e)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2894] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2900] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2902] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2905] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2910] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2918] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2926] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b558af2b-19fb-4a8b-a432-c92141b38d13)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2927] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2931] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2935] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2937] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2943] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2950] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2959] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (eccfa609-5d3a-480b-b546-ce9d96de7c68)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2961] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2968] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2972] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2975] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.2978] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3001] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3005] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3011] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3014] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3024] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3030] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3037] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 kernel: ovs-system: entered promiscuous mode
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3042] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3045] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3053] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3061] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 systemd-udevd[51663]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3066] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3072] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3080] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 kernel: Timeout policy base is empty
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3084] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3088] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3090] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3096] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3101] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3105] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3106] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3112] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3118] dhcp4 (eth0): canceled DHCP transaction
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3118] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3119] dhcp4 (eth0): state changed no lease
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3120] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3132] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3138] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51658 uid=0 result="fail" reason="Device is not activated"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3142] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3152] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3192] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3210] device (eth1): disconnecting for new activation request.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3212] audit: op="connection-activate" uuid="6151fa65-6cef-549f-91ba-9f68f8a2cb73" name="ci-private-network" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3212] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3354] device (eth1): Activation: starting connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3359] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3365] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3384] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3387] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3395] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3399] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3403] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3405] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3406] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3407] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3408] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3410] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3411] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3413] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3420] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3427] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3430] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3436] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3439] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3443] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3446] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3450] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3454] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3459] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3462] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3466] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3471] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3474] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 kernel: br-ex: entered promiscuous mode
Dec  6 04:33:16 np0005548915 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3524] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3527] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3532] device (eth1): Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 kernel: vlan22: entered promiscuous mode
Dec  6 04:33:16 np0005548915 systemd-udevd[51662]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3635] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3645] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 kernel: vlan20: entered promiscuous mode
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3665] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3667] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3673] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3716] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3723] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3742] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3743] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3748] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 kernel: vlan21: entered promiscuous mode
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3794] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3802] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3826] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3827] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3832] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 kernel: vlan23: entered promiscuous mode
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3876] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3884] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3958] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3965] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3998] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.3999] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.4005] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.4011] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.4013] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 04:33:16 np0005548915 NetworkManager[48882]: <info>  [1765013596.4018] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 04:33:17 np0005548915 NetworkManager[48882]: <info>  [1765013597.0530] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec  6 04:33:17 np0005548915 NetworkManager[48882]: <info>  [1765013597.5150] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec  6 04:33:17 np0005548915 python3.9[52015]: ansible-ansible.legacy.async_status Invoked with jid=j224323356287.51652 mode=status _async_dir=/root/.ansible_async
Dec  6 04:33:17 np0005548915 NetworkManager[48882]: <info>  [1765013597.8036] checkpoint[0x55f1b74e3950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  6 04:33:17 np0005548915 NetworkManager[48882]: <info>  [1765013597.8041] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.2393] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.2407] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.5577] audit: op="networking-control" arg="global-dns-configuration" pid=51658 uid=0 result="success"
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.5620] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.5661] audit: op="networking-control" arg="global-dns-configuration" pid=51658 uid=0 result="success"
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.5698] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec  6 04:33:18 np0005548915 ansible-async_wrapper.py[51655]: 51656 still running (300)
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.8092] checkpoint[0x55f1b74e3a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  6 04:33:18 np0005548915 NetworkManager[48882]: <info>  [1765013598.8096] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec  6 04:33:18 np0005548915 ansible-async_wrapper.py[51656]: Module complete (51656)
Dec  6 04:33:21 np0005548915 python3.9[52121]: ansible-ansible.legacy.async_status Invoked with jid=j224323356287.51652 mode=status _async_dir=/root/.ansible_async
Dec  6 04:33:21 np0005548915 python3.9[52221]: ansible-ansible.legacy.async_status Invoked with jid=j224323356287.51652 mode=cleanup _async_dir=/root/.ansible_async
Dec  6 04:33:21 np0005548915 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 04:33:22 np0005548915 python3.9[52375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:33:23 np0005548915 python3.9[52498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013602.007697-926-194097857807042/.source.returncode _original_basename=.9o4iarjp follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:23 np0005548915 ansible-async_wrapper.py[51655]: Done in kid B.
Dec  6 04:33:24 np0005548915 python3.9[52650]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:33:24 np0005548915 python3.9[52774]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013603.6508775-974-95559424194779/.source.cfg _original_basename=.rf6_845k follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:25 np0005548915 python3.9[52926]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:33:25 np0005548915 systemd[1]: Reloading Network Manager...
Dec  6 04:33:25 np0005548915 NetworkManager[48882]: <info>  [1765013605.9427] audit: op="reload" arg="0" pid=52930 uid=0 result="success"
Dec  6 04:33:25 np0005548915 NetworkManager[48882]: <info>  [1765013605.9438] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  6 04:33:25 np0005548915 systemd[1]: Reloaded Network Manager.
Dec  6 04:33:26 np0005548915 systemd[1]: session-10.scope: Deactivated successfully.
Dec  6 04:33:26 np0005548915 systemd[1]: session-10.scope: Consumed 53.797s CPU time.
Dec  6 04:33:26 np0005548915 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Dec  6 04:33:26 np0005548915 systemd-logind[795]: Removed session 10.
Dec  6 04:33:32 np0005548915 systemd-logind[795]: New session 11 of user zuul.
Dec  6 04:33:32 np0005548915 systemd[1]: Started Session 11 of User zuul.
Dec  6 04:33:33 np0005548915 python3.9[53115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:33:34 np0005548915 python3.9[53270]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:33:35 np0005548915 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 04:33:37 np0005548915 python3.9[53465]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:33:37 np0005548915 systemd[1]: session-11.scope: Deactivated successfully.
Dec  6 04:33:37 np0005548915 systemd[1]: session-11.scope: Consumed 2.752s CPU time.
Dec  6 04:33:37 np0005548915 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Dec  6 04:33:37 np0005548915 systemd-logind[795]: Removed session 11.
Dec  6 04:33:43 np0005548915 systemd-logind[795]: New session 12 of user zuul.
Dec  6 04:33:43 np0005548915 systemd[1]: Started Session 12 of User zuul.
Dec  6 04:33:45 np0005548915 python3.9[53646]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:33:46 np0005548915 python3.9[53800]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:33:47 np0005548915 python3.9[53957]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:33:48 np0005548915 python3.9[54041]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:33:50 np0005548915 python3.9[54194]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:33:52 np0005548915 python3.9[54389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:53 np0005548915 python3.9[54541]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:33:53 np0005548915 podman[54542]: 2025-12-06 09:33:53.13376544 +0000 UTC m=+0.074602575 system refresh
Dec  6 04:33:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:33:54 np0005548915 python3.9[54704]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:33:55 np0005548915 python3.9[54827]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013633.5340986-197-257995235937897/.source.json follow=False _original_basename=podman_network_config.j2 checksum=03deeea959a9993f39215aad2a3d3f6b4484abaa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:33:55 np0005548915 python3.9[54979]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:33:56 np0005548915 python3.9[55102]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013635.2761316-242-228714727878413/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:33:57 np0005548915 python3.9[55254]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:33:58 np0005548915 python3.9[55406]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:33:58 np0005548915 python3.9[55558]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:33:59 np0005548915 python3.9[55710]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:34:00 np0005548915 python3.9[55862]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:34:03 np0005548915 python3.9[56015]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:34:04 np0005548915 python3.9[56169]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:34:05 np0005548915 python3.9[56321]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:34:06 np0005548915 python3.9[56473]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:34:07 np0005548915 python3.9[56626]: ansible-service_facts Invoked
Dec  6 04:34:07 np0005548915 network[56643]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:34:07 np0005548915 network[56644]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:34:07 np0005548915 network[56645]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:34:14 np0005548915 python3.9[57097]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:34:18 np0005548915 python3.9[57250]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  6 04:34:19 np0005548915 python3.9[57402]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:20 np0005548915 python3.9[57527]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013659.0654032-674-151548220009269/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:21 np0005548915 python3.9[57681]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:21 np0005548915 python3.9[57806]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013660.697661-719-159048564894170/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:23 np0005548915 python3.9[57960]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:25 np0005548915 python3.9[58114]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:34:26 np0005548915 python3.9[58198]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:34:28 np0005548915 python3.9[58352]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:34:29 np0005548915 python3.9[58436]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:34:29 np0005548915 chronyd[778]: chronyd exiting
Dec  6 04:34:29 np0005548915 systemd[1]: Stopping NTP client/server...
Dec  6 04:34:29 np0005548915 systemd[1]: chronyd.service: Deactivated successfully.
Dec  6 04:34:29 np0005548915 systemd[1]: Stopped NTP client/server.
Dec  6 04:34:29 np0005548915 systemd[1]: Starting NTP client/server...
Dec  6 04:34:29 np0005548915 chronyd[58445]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  6 04:34:29 np0005548915 chronyd[58445]: Frequency -26.315 +/- 0.691 ppm read from /var/lib/chrony/drift
Dec  6 04:34:29 np0005548915 chronyd[58445]: Loaded seccomp filter (level 2)
Dec  6 04:34:29 np0005548915 systemd[1]: Started NTP client/server.
Dec  6 04:34:30 np0005548915 systemd[1]: session-12.scope: Deactivated successfully.
Dec  6 04:34:30 np0005548915 systemd[1]: session-12.scope: Consumed 29.917s CPU time.
Dec  6 04:34:30 np0005548915 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Dec  6 04:34:30 np0005548915 systemd-logind[795]: Removed session 12.
Dec  6 04:34:36 np0005548915 systemd-logind[795]: New session 13 of user zuul.
Dec  6 04:34:36 np0005548915 systemd[1]: Started Session 13 of User zuul.
Dec  6 04:34:36 np0005548915 python3.9[58626]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:38 np0005548915 python3.9[58778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:38 np0005548915 python3.9[58901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013677.2623353-62-225085314951969/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:39 np0005548915 systemd[1]: session-13.scope: Deactivated successfully.
Dec  6 04:34:39 np0005548915 systemd[1]: session-13.scope: Consumed 1.993s CPU time.
Dec  6 04:34:39 np0005548915 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Dec  6 04:34:39 np0005548915 systemd-logind[795]: Removed session 13.
Dec  6 04:34:45 np0005548915 systemd-logind[795]: New session 14 of user zuul.
Dec  6 04:34:45 np0005548915 systemd[1]: Started Session 14 of User zuul.
Dec  6 04:34:47 np0005548915 python3.9[59079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:34:48 np0005548915 python3.9[59235]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:49 np0005548915 python3.9[59411]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:49 np0005548915 python3.9[59534]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765013688.4933412-83-79513902301521/.source.json _original_basename=.m2p3csdt follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:51 np0005548915 python3.9[59686]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:51 np0005548915 python3.9[59809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013690.6576765-152-192348103060886/.source _original_basename=.l0tuqyj3 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:52 np0005548915 python3.9[59961]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:34:53 np0005548915 python3.9[60113]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:54 np0005548915 python3.9[60236]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013693.0702667-224-239507521430327/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:34:55 np0005548915 python3.9[60388]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:55 np0005548915 python3.9[60511]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013694.6037858-224-214058468494415/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:34:56 np0005548915 python3.9[60663]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:57 np0005548915 python3.9[60815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:58 np0005548915 python3.9[60938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013696.9197714-335-236412990063071/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:34:59 np0005548915 python3.9[61090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:34:59 np0005548915 python3.9[61213]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013698.5876825-380-144554390355964/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:01 np0005548915 python3.9[61365]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:35:01 np0005548915 systemd[1]: Reloading.
Dec  6 04:35:01 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:35:01 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:35:01 np0005548915 systemd[1]: Reloading.
Dec  6 04:35:01 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:35:01 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:35:01 np0005548915 systemd[1]: Starting EDPM Container Shutdown...
Dec  6 04:35:01 np0005548915 systemd[1]: Finished EDPM Container Shutdown.
Dec  6 04:35:02 np0005548915 python3.9[61594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:03 np0005548915 python3.9[61717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013702.0171874-449-257194567764607/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:04 np0005548915 python3.9[61869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:04 np0005548915 python3.9[61992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013703.4697819-494-159921600893920/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:05 np0005548915 python3.9[62144]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:35:05 np0005548915 systemd[1]: Reloading.
Dec  6 04:35:05 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:35:05 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:35:05 np0005548915 systemd[1]: Reloading.
Dec  6 04:35:06 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:35:06 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:35:06 np0005548915 systemd[1]: Starting Create netns directory...
Dec  6 04:35:06 np0005548915 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 04:35:06 np0005548915 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 04:35:06 np0005548915 systemd[1]: Finished Create netns directory.
Dec  6 04:35:07 np0005548915 python3.9[62370]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:35:07 np0005548915 network[62387]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:35:07 np0005548915 network[62388]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:35:07 np0005548915 network[62389]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:35:13 np0005548915 python3.9[62651]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:35:13 np0005548915 systemd[1]: Reloading.
Dec  6 04:35:13 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:35:13 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:35:13 np0005548915 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  6 04:35:14 np0005548915 iptables.init[62691]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  6 04:35:14 np0005548915 iptables.init[62691]: iptables: Flushing firewall rules: [  OK  ]
Dec  6 04:35:14 np0005548915 systemd[1]: iptables.service: Deactivated successfully.
Dec  6 04:35:14 np0005548915 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  6 04:35:15 np0005548915 python3.9[62888]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:35:17 np0005548915 python3.9[63042]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:35:17 np0005548915 systemd[1]: Reloading.
Dec  6 04:35:17 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:35:17 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:35:17 np0005548915 systemd[1]: Starting Netfilter Tables...
Dec  6 04:35:17 np0005548915 systemd[1]: Finished Netfilter Tables.
Dec  6 04:35:18 np0005548915 python3.9[63234]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:35:20 np0005548915 python3.9[63387]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:20 np0005548915 python3.9[63512]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013719.481677-701-250506154001412/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:21 np0005548915 python3.9[63665]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:35:22 np0005548915 systemd[1]: Reloading OpenSSH server daemon...
Dec  6 04:35:22 np0005548915 systemd[1]: Reloaded OpenSSH server daemon.
Dec  6 04:35:23 np0005548915 python3.9[63821]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:24 np0005548915 python3.9[63973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:25 np0005548915 python3.9[64096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013723.8908403-794-209916110352718/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:26 np0005548915 python3.9[64248]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  6 04:35:26 np0005548915 systemd[1]: Starting Time & Date Service...
Dec  6 04:35:26 np0005548915 systemd[1]: Started Time & Date Service.
Dec  6 04:35:28 np0005548915 python3.9[64404]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:29 np0005548915 python3.9[64556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:29 np0005548915 python3.9[64679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013728.649338-899-34963519501867/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:30 np0005548915 python3.9[64831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:31 np0005548915 python3.9[64954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013730.1750224-944-99278301765048/.source.yaml _original_basename=.vh4w45ip follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:32 np0005548915 python3.9[65106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:32 np0005548915 python3.9[65229]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013731.7116563-989-145585259462717/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:33 np0005548915 python3.9[65381]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:35:34 np0005548915 python3.9[65534]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:35:35 np0005548915 python3[65687]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 04:35:36 np0005548915 python3.9[65839]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:37 np0005548915 python3.9[65962]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013735.7543647-1106-264876055125822/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:37 np0005548915 python3.9[66114]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:38 np0005548915 python3.9[66237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013737.2825475-1151-218966978741015/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:39 np0005548915 python3.9[66389]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:40 np0005548915 python3.9[66512]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013738.8889432-1196-219208165684260/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:41 np0005548915 python3.9[66664]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:41 np0005548915 python3.9[66787]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013740.4591594-1241-82489744974300/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:42 np0005548915 python3.9[66939]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:35:43 np0005548915 python3.9[67062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013741.928661-1286-154263044823477/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:43 np0005548915 python3.9[67214]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:44 np0005548915 python3.9[67366]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:35:46 np0005548915 python3.9[67525]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
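The entries above show the EDPM firewall role staging its rule files under /etc/nftables and then anchoring them from /etc/sysconfig/nftables.conf via a blockinfile managed block (validated with `nft -c -f`). A minimal sketch of what that managed block looks like, written to a scratch path so it runs unprivileged; `/tmp/nftables.conf.demo` is illustrative, not from the log:

```shell
# Reproduce the ANSIBLE MANAGED BLOCK that blockinfile appends to
# /etc/sysconfig/nftables.conf (scratch copy; the real task validates
# the resulting file with `nft -c -f %s`, which needs nftables + root).
demo=/tmp/nftables.conf.demo
cat > "$demo" <<'EOF'
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
EOF
# Count the include directives the block carries
grep -c '^include' "$demo"
```

The include order mirrors the pipeline checked a few tasks earlier: chains and rules must be defined before the jump chains that reference them.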
Dec  6 04:35:47 np0005548915 python3.9[67678]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:48 np0005548915 python3.9[67830]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:35:49 np0005548915 python3.9[67982]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  6 04:35:50 np0005548915 python3.9[68135]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
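The two ansible.posix.mount tasks above persist and mount hugetlbfs filesystems with 1 GiB and 2 MiB page sizes. Roughly equivalent fstab entries are sketched below; since mounting needs root, the block only writes them to a scratch file (`/tmp/fstab.hugepages.demo` is an illustrative path):

```shell
# fstab entries equivalent to the two ansible.posix.mount tasks
# (state=mounted, boot=True persists them and mounts immediately).
fstab_demo=/tmp/fstab.hugepages.demo
cat > "$fstab_demo" <<'EOF'
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
EOF
# With root, each entry would be applied as, e.g.:
#   mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
cat "$fstab_demo"
```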
Dec  6 04:35:51 np0005548915 systemd[1]: session-14.scope: Deactivated successfully.
Dec  6 04:35:51 np0005548915 systemd[1]: session-14.scope: Consumed 43.686s CPU time.
Dec  6 04:35:51 np0005548915 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Dec  6 04:35:51 np0005548915 systemd-logind[795]: Removed session 14.
Dec  6 04:35:56 np0005548915 systemd-logind[795]: New session 15 of user zuul.
Dec  6 04:35:56 np0005548915 systemd[1]: Started Session 15 of User zuul.
Dec  6 04:35:56 np0005548915 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  6 04:35:57 np0005548915 python3.9[68318]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  6 04:35:58 np0005548915 python3.9[68470]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:35:59 np0005548915 python3.9[68622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:36:00 np0005548915 python3.9[68774]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtvqYC0W0zPSX/plyJvm0q1VGDScYTNlcCdllukOe81JRfU3GhVusPZOX0xRSaLP/lmXtfqWcbBRCkLsmFrAo2EHn1CMqMr5WkhY4+rgApF+MGLDOUo57tlKZLPIwdL0SSY/Qv8lBfrqr7LUDZ7fTTTbqTzim/bncxg/u0KxSWBdvjfmYi13SwO65wDkFqSVYa3h8DNij6cRRjQ0fJuJ9Da860hmMnqo9GJMU6dq3zMXXn3YfuF4E4M0UQdlWmVW4EwBTzsfA1XYbSpW7VdRJw6esB4vZ9/Succj+XZiANoDqL9gXSEjNXVVWVbL/7aGJJF9LLQ3VVxmHdbYs1NcTI6Yy9d61zDJHnK/nlYHMhmAHxiDsZEpv0xF72LLzaI86xxvnbx4eUpnyW6LnKiUCYUAUrWIMpLiIbWUxeIoYmj9rqLhwlo5kCy7WdCYYEMTtGI53oIyU0EbXf/r4WAuzmqpVRPyc2Sd5tYD4aXh1JZLUcZy+NLR0Y4SA8RflKFcs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFDJYF6pUvFgGUbY2QEOHAq7ZEhRQJUqPTVPOuTyb476#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPJ19afQPeSMtr3O9L1fe5+bNzTAsOOCA5fLihUdryDYc29KKD+0XABHKIvqeefcCsIBjZRA//9OzCUftfvXK9A=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAiB67qk/R3IfGpcAH1Ojopc8KX94De+Kxs31cKQLD04X+4QRXPRdMxU85LOhN58eKoHaBi8cgqk7+dvRypGD5vbtbRN9r0VN7tGwiSQTlVFbEuhn0AEbnRwNAMWEEMHO9kEjufP4N2zEEhtQBXy9oO2tMX3+BX4Z3YZZMQyZUgohdBHp2VCul9VdRuo0oHSr8HHm0nN61dMjalnThmgkGAu5hG8qhkWT4i9hroSKBsR5kVBUFTqdXekYkVy4YIYfM2lBXiMOFHtvr1a+KOyIfgWMb7GBPW7oKqtzCfVgSbGaUhSvGzs1OWt3U/PjjapIlmDnwD5ukzVxWV5ldh0vA48tXh5R1wqAoN5/Y/RiAKaY2kd/fvtkhvVDGZluXOz5jJ02IFHm+v4dP3Ig8YOuS5BEkWFuJHkblW0t/+4siTHWwmGEuvUI6y8Gb2pGcBKsWCJtLePYzT09IAmrjwO0jAgbWy0nvCZ+SKlbBBrXP6OgNgMkA+GH9iGOl6FOuRok=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYNj3LmNvR0emoQHuuy9NKXPivs/dznunVy8GExnJl8#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJhKmGSvg8FMw16qKPzk6Pyj+OHkN3bmk20mts1PdCRcNRnn9sT1DgI6U8Aze1tjGPujT4eDL+Y9r/hsrfM4qDc=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDneZurSARwLaZA1xEymzXlvVAPvP8u0PCrqXuMYD5ewImDDChRITnk4XHKT/DUfrSJf9/7oJsddEbLRjhCtedqrMZsCkWz1BxtCmPBuvz2LfFhEn27TjqYLctOVGigQGsj6ILvPOzzLiapd93yApWDmH6P0un/ltmdM0iZLygNpzG3HLF8STBXzlo/8slci69Em7XppcrOpl1TS7DaVlpNcRQvo9pFuIrbMD9g0DOdMwk5YCH6g7OzGWqq0gt0YUOztmsqxWHKav3E0SXAD/vkgRc/1ZCNGFNSvf0dIgimCF3xlNWrppnvNgQ1BRqiQ7RArlOp1bVg0Ugdce6f4TIrq36Ois2U5+/myF5WQ7l9hRMRvoP64hSSsRAIDobTI/zMStUP3iZPFngxDxwQtpydHfFGywBL9811c42U7JsGxE8890uOIDk/oOkyhSH6KHQCPFjmKBJ98nT01lgnXyFSNOqds6QOYBasUWNFWd2wS7YpTheGlVVM8bk/gB4K2L0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMkn8zp09tRuEaH/bUoP0rYj+dziM1KcqMKxOgM9K1U#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrMdvJJYP0cflC7RDFsxwr66nSp9R7QU726CAfJcKLw6vHh8Z9Lw5wLH0kiaSpsb6SAPffloplHEDiwTOkghOc=#012 create=True mode=0644 path=/tmp/ansible.b3twgp32 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:36:01 np0005548915 python3.9[68926]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b3twgp32' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:36:02 np0005548915 python3.9[69080]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b3twgp32 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
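The tasks from the tempfile creation through this cleanup implement a common pattern: stage the known-host entries in a tempfile with blockinfile, install them over /etc/ssh/ssh_known_hosts with a shell `cat`, then delete the tempfile. A scratch-path sketch of the same sequence (the target path and the truncated key are illustrative, not the real values from the log):

```shell
# tempfile -> managed block -> install -> cleanup, as in the log sequence.
tmp=$(mktemp /tmp/ansible.XXXXXX)
target=/tmp/ssh_known_hosts.demo   # real target: /etc/ssh/ssh_known_hosts
{
  echo '# BEGIN ANSIBLE MANAGED BLOCK'
  # one entry per host key; key material truncated for illustration
  echo 'compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...'
  echo '# END ANSIBLE MANAGED BLOCK'
} > "$tmp"
cat "$tmp" > "$target"   # overwrite the system-wide known_hosts in one step
rm -f "$tmp"
grep -c 'compute-0' "$target"
```

Writing via a tempfile lets the managed block be assembled and inspected before the system-wide file is replaced.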
Dec  6 04:36:02 np0005548915 systemd[1]: session-15.scope: Deactivated successfully.
Dec  6 04:36:02 np0005548915 systemd[1]: session-15.scope: Consumed 4.219s CPU time.
Dec  6 04:36:02 np0005548915 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Dec  6 04:36:02 np0005548915 systemd-logind[795]: Removed session 15.
Dec  6 04:36:08 np0005548915 systemd-logind[795]: New session 16 of user zuul.
Dec  6 04:36:08 np0005548915 systemd[1]: Started Session 16 of User zuul.
Dec  6 04:36:09 np0005548915 python3.9[69258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:36:10 np0005548915 python3.9[69414]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  6 04:36:11 np0005548915 python3.9[69568]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:36:12 np0005548915 python3.9[69721]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:36:13 np0005548915 python3.9[69874]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:36:14 np0005548915 python3.9[70028]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:36:15 np0005548915 python3.9[70183]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:36:16 np0005548915 systemd[1]: session-16.scope: Deactivated successfully.
Dec  6 04:36:16 np0005548915 systemd[1]: session-16.scope: Consumed 5.180s CPU time.
Dec  6 04:36:16 np0005548915 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Dec  6 04:36:16 np0005548915 systemd-logind[795]: Removed session 16.
Dec  6 04:36:21 np0005548915 systemd-logind[795]: New session 17 of user zuul.
Dec  6 04:36:21 np0005548915 systemd[1]: Started Session 17 of User zuul.
Dec  6 04:36:22 np0005548915 python3.9[70361]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:36:23 np0005548915 python3.9[70517]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:36:24 np0005548915 python3.9[70601]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 04:36:27 np0005548915 python3.9[70752]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:36:28 np0005548915 python3.9[70903]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 04:36:29 np0005548915 python3.9[71053]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:36:29 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:36:30 np0005548915 python3.9[71204]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:36:30 np0005548915 systemd[1]: session-17.scope: Deactivated successfully.
Dec  6 04:36:30 np0005548915 systemd[1]: session-17.scope: Consumed 6.214s CPU time.
Dec  6 04:36:30 np0005548915 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Dec  6 04:36:30 np0005548915 systemd-logind[795]: Removed session 17.
Dec  6 04:36:39 np0005548915 chronyd[58445]: Selected source 23.133.168.246 (pool.ntp.org)
Dec  6 04:36:41 np0005548915 systemd-logind[795]: New session 18 of user zuul.
Dec  6 04:36:41 np0005548915 systemd[1]: Started Session 18 of User zuul.
Dec  6 04:36:49 np0005548915 python3[71971]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:36:51 np0005548915 python3[72066]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  6 04:36:53 np0005548915 python3[72093]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 04:36:53 np0005548915 python3[72119]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:36:53 np0005548915 kernel: loop: module loaded
Dec  6 04:36:53 np0005548915 kernel: loop3: detected capacity change from 0 to 41943040
Dec  6 04:36:53 np0005548915 python3[72154]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:36:53 np0005548915 lvm[72157]: PV /dev/loop3 not used.
Dec  6 04:36:54 np0005548915 lvm[72166]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:36:54 np0005548915 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  6 04:36:54 np0005548915 lvm[72168]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  6 04:36:54 np0005548915 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  6 04:36:54 np0005548915 python3[72246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:36:55 np0005548915 python3[72319]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013814.3299854-36827-261843508893121/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:36:55 np0005548915 python3[72369]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:36:55 np0005548915 systemd[1]: Reloading.
Dec  6 04:36:55 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:36:55 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:36:56 np0005548915 systemd[1]: Starting Ceph OSD losetup...
Dec  6 04:36:56 np0005548915 bash[72409]: /dev/loop3: [64513]:4327963 (/var/lib/ceph-osd-0.img)
Dec  6 04:36:56 np0005548915 systemd[1]: Finished Ceph OSD losetup.
Dec  6 04:36:56 np0005548915 lvm[72410]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:36:56 np0005548915 lvm[72410]: VG ceph_vg0 finished
Dec  6 04:36:58 np0005548915 python3[72434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:37:00 np0005548915 python3[72527]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  6 04:37:03 np0005548915 python3[72584]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  6 04:37:06 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:37:06 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:37:07 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:37:07 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:37:07 np0005548915 systemd[1]: run-re7dc9dd4ed464e24bc713549f486bb07.service: Deactivated successfully.
Dec  6 04:37:07 np0005548915 python3[72699]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 04:37:07 np0005548915 python3[72727]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:37:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:08 np0005548915 python3[72791]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:37:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:08 np0005548915 python3[72817]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:37:09 np0005548915 python3[72895]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:37:10 np0005548915 python3[72968]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013829.3170278-37019-36960213508375/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:37:10 np0005548915 python3[73070]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:37:11 np0005548915 python3[73143]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013830.5843883-37037-38814438970557/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:37:11 np0005548915 python3[73193]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 04:37:12 np0005548915 python3[73221]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 04:37:12 np0005548915 python3[73249]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 04:37:12 np0005548915 python3[73277]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:37:13 np0005548915 systemd-logind[795]: New session 19 of user ceph-admin.
Dec  6 04:37:13 np0005548915 systemd[1]: Created slice User Slice of UID 42477.
Dec  6 04:37:13 np0005548915 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  6 04:37:13 np0005548915 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  6 04:37:13 np0005548915 systemd[1]: Starting User Manager for UID 42477...
Dec  6 04:37:13 np0005548915 systemd[73285]: Queued start job for default target Main User Target.
Dec  6 04:37:13 np0005548915 systemd[73285]: Created slice User Application Slice.
Dec  6 04:37:13 np0005548915 systemd[73285]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 04:37:13 np0005548915 systemd[73285]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 04:37:13 np0005548915 systemd[73285]: Reached target Paths.
Dec  6 04:37:13 np0005548915 systemd[73285]: Reached target Timers.
Dec  6 04:37:13 np0005548915 systemd[73285]: Starting D-Bus User Message Bus Socket...
Dec  6 04:37:13 np0005548915 systemd[73285]: Starting Create User's Volatile Files and Directories...
Dec  6 04:37:13 np0005548915 systemd[73285]: Listening on D-Bus User Message Bus Socket.
Dec  6 04:37:13 np0005548915 systemd[73285]: Reached target Sockets.
Dec  6 04:37:13 np0005548915 systemd[73285]: Finished Create User's Volatile Files and Directories.
Dec  6 04:37:13 np0005548915 systemd[73285]: Reached target Basic System.
Dec  6 04:37:13 np0005548915 systemd[73285]: Reached target Main User Target.
Dec  6 04:37:13 np0005548915 systemd[73285]: Startup finished in 128ms.
Dec  6 04:37:13 np0005548915 systemd[1]: Started User Manager for UID 42477.
Dec  6 04:37:13 np0005548915 systemd[1]: Started Session 19 of User ceph-admin.
Dec  6 04:37:13 np0005548915 systemd[1]: session-19.scope: Deactivated successfully.
Dec  6 04:37:13 np0005548915 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Dec  6 04:37:13 np0005548915 systemd-logind[795]: Removed session 19.
Dec  6 04:37:13 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:13 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:16 np0005548915 systemd[1]: var-lib-containers-storage-overlay-compat2012127482-lower\x2dmapped.mount: Deactivated successfully.
Dec  6 04:37:23 np0005548915 systemd[1]: Stopping User Manager for UID 42477...
Dec  6 04:37:23 np0005548915 systemd[73285]: Activating special unit Exit the Session...
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped target Main User Target.
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped target Basic System.
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped target Paths.
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped target Sockets.
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped target Timers.
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 04:37:23 np0005548915 systemd[73285]: Closed D-Bus User Message Bus Socket.
Dec  6 04:37:23 np0005548915 systemd[73285]: Stopped Create User's Volatile Files and Directories.
Dec  6 04:37:23 np0005548915 systemd[73285]: Removed slice User Application Slice.
Dec  6 04:37:23 np0005548915 systemd[73285]: Reached target Shutdown.
Dec  6 04:37:23 np0005548915 systemd[73285]: Finished Exit the Session.
Dec  6 04:37:23 np0005548915 systemd[73285]: Reached target Exit the Session.
Dec  6 04:37:23 np0005548915 systemd[1]: user@42477.service: Deactivated successfully.
Dec  6 04:37:23 np0005548915 systemd[1]: Stopped User Manager for UID 42477.
Dec  6 04:37:23 np0005548915 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  6 04:37:23 np0005548915 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  6 04:37:23 np0005548915 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  6 04:37:23 np0005548915 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  6 04:37:23 np0005548915 systemd[1]: Removed slice User Slice of UID 42477.
Dec  6 04:37:33 np0005548915 podman[73378]: 2025-12-06 09:37:33.687801094 +0000 UTC m=+19.703466437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:33 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:33 np0005548915 podman[73446]: 2025-12-06 09:37:33.792314542 +0000 UTC m=+0.070313437 container create 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 04:37:33 np0005548915 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  6 04:37:33 np0005548915 systemd[1]: Started libpod-conmon-8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474.scope.
Dec  6 04:37:33 np0005548915 podman[73446]: 2025-12-06 09:37:33.762541117 +0000 UTC m=+0.040540092 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:33 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:33 np0005548915 podman[73446]: 2025-12-06 09:37:33.920746267 +0000 UTC m=+0.198745232 container init 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:37:33 np0005548915 podman[73446]: 2025-12-06 09:37:33.932207863 +0000 UTC m=+0.210206768 container start 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec  6 04:37:33 np0005548915 podman[73446]: 2025-12-06 09:37:33.936718844 +0000 UTC m=+0.214717799 container attach 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:37:34 np0005548915 brave_boyd[73463]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474.scope: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73446]: 2025-12-06 09:37:34.065348983 +0000 UTC m=+0.343347908 container died 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8c214ddb913c3742d07df8ad18e863b695be8c3163b5e069e27c9a2e5f315238-merged.mount: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73446]: 2025-12-06 09:37:34.123099014 +0000 UTC m=+0.401097919 container remove 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-conmon-8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474.scope: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73478]: 2025-12-06 09:37:34.222562417 +0000 UTC m=+0.067686577 container create 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:34 np0005548915 systemd[1]: Started libpod-conmon-9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586.scope.
Dec  6 04:37:34 np0005548915 podman[73478]: 2025-12-06 09:37:34.194641183 +0000 UTC m=+0.039765343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:34 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:34 np0005548915 podman[73478]: 2025-12-06 09:37:34.312984069 +0000 UTC m=+0.158108239 container init 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 04:37:34 np0005548915 podman[73478]: 2025-12-06 09:37:34.32239689 +0000 UTC m=+0.167521050 container start 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:37:34 np0005548915 happy_murdock[73495]: 167 167
Dec  6 04:37:34 np0005548915 podman[73478]: 2025-12-06 09:37:34.326795907 +0000 UTC m=+0.171920067 container attach 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586.scope: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73500]: 2025-12-06 09:37:34.376301247 +0000 UTC m=+0.030973737 container died 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  6 04:37:34 np0005548915 podman[73500]: 2025-12-06 09:37:34.421533854 +0000 UTC m=+0.076206314 container remove 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-conmon-9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586.scope: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73515]: 2025-12-06 09:37:34.525248251 +0000 UTC m=+0.064631435 container create 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:34 np0005548915 systemd[1]: Started libpod-conmon-1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6.scope.
Dec  6 04:37:34 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:34 np0005548915 podman[73515]: 2025-12-06 09:37:34.498272991 +0000 UTC m=+0.037656245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:34 np0005548915 podman[73515]: 2025-12-06 09:37:34.604784552 +0000 UTC m=+0.144167776 container init 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  6 04:37:34 np0005548915 podman[73515]: 2025-12-06 09:37:34.614261314 +0000 UTC m=+0.153644518 container start 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:37:34 np0005548915 podman[73515]: 2025-12-06 09:37:34.618878207 +0000 UTC m=+0.158261411 container attach 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:34 np0005548915 trusting_jang[73531]: AQBe+TNpCJmxJhAAKualQfzBdNpHXqNpZCg4iA==
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6.scope: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73515]: 2025-12-06 09:37:34.654400525 +0000 UTC m=+0.193783729 container died 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:37:34 np0005548915 podman[73515]: 2025-12-06 09:37:34.705069567 +0000 UTC m=+0.244452761 container remove 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-conmon-1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6.scope: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73551]: 2025-12-06 09:37:34.806739108 +0000 UTC m=+0.068954630 container create d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:34 np0005548915 systemd[1]: Started libpod-conmon-d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee.scope.
Dec  6 04:37:34 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:34 np0005548915 podman[73551]: 2025-12-06 09:37:34.779987355 +0000 UTC m=+0.042202937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:34 np0005548915 podman[73551]: 2025-12-06 09:37:34.882813847 +0000 UTC m=+0.145029439 container init d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:34 np0005548915 podman[73551]: 2025-12-06 09:37:34.88893443 +0000 UTC m=+0.151149952 container start d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  6 04:37:34 np0005548915 podman[73551]: 2025-12-06 09:37:34.893175353 +0000 UTC m=+0.155390895 container attach d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Dec  6 04:37:34 np0005548915 nice_hermann[73566]: AQBe+TNpeHp/NhAAz2vYIGRSDKK5iPaaLsWobw==
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee.scope: Deactivated successfully.
Dec  6 04:37:34 np0005548915 podman[73551]: 2025-12-06 09:37:34.920680717 +0000 UTC m=+0.182896219 container died d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 04:37:34 np0005548915 podman[73551]: 2025-12-06 09:37:34.963199811 +0000 UTC m=+0.225415313 container remove d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:34 np0005548915 systemd[1]: libpod-conmon-d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee.scope: Deactivated successfully.
Dec  6 04:37:35 np0005548915 podman[73587]: 2025-12-06 09:37:35.050361956 +0000 UTC m=+0.059704193 container create 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:35 np0005548915 systemd[1]: Started libpod-conmon-7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4.scope.
Dec  6 04:37:35 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:35 np0005548915 podman[73587]: 2025-12-06 09:37:35.023379996 +0000 UTC m=+0.032722283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:35 np0005548915 podman[73587]: 2025-12-06 09:37:35.660644273 +0000 UTC m=+0.669986560 container init 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:37:35 np0005548915 podman[73587]: 2025-12-06 09:37:35.667385293 +0000 UTC m=+0.676727540 container start 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 04:37:35 np0005548915 happy_torvalds[73603]: AQBf+TNpklaXKRAAjaDdl7bhP4qVZ4du0ZTu+Q==
Dec  6 04:37:35 np0005548915 systemd[1]: libpod-7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4.scope: Deactivated successfully.
Dec  6 04:37:35 np0005548915 podman[73587]: 2025-12-06 09:37:35.971567106 +0000 UTC m=+0.980909413 container attach 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:35 np0005548915 podman[73587]: 2025-12-06 09:37:35.972235934 +0000 UTC m=+0.981578171 container died 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a84ace5647d6ef5214854da59463f86833e8d4ba258fe7a735a40321daded3bc-merged.mount: Deactivated successfully.
Dec  6 04:37:38 np0005548915 podman[73587]: 2025-12-06 09:37:38.450959056 +0000 UTC m=+3.460301303 container remove 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 04:37:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:38 np0005548915 systemd[1]: libpod-conmon-7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4.scope: Deactivated successfully.
Dec  6 04:37:38 np0005548915 podman[73625]: 2025-12-06 09:37:38.535539682 +0000 UTC m=+0.054509104 container create 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:38 np0005548915 systemd[1]: Started libpod-conmon-4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e.scope.
Dec  6 04:37:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcf53142a05666f10abd497e82241d956ccc22802545d416deaf2058a4028d4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:38 np0005548915 podman[73625]: 2025-12-06 09:37:38.606057903 +0000 UTC m=+0.125027365 container init 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:37:38 np0005548915 podman[73625]: 2025-12-06 09:37:38.517999434 +0000 UTC m=+0.036968866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:38 np0005548915 podman[73625]: 2025-12-06 09:37:38.615036983 +0000 UTC m=+0.134006425 container start 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:37:38 np0005548915 podman[73625]: 2025-12-06 09:37:38.619373018 +0000 UTC m=+0.138342480 container attach 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:37:38 np0005548915 admiring_rhodes[73641]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  6 04:37:38 np0005548915 admiring_rhodes[73641]: setting min_mon_release = quincy
Dec  6 04:37:38 np0005548915 admiring_rhodes[73641]: /usr/bin/monmaptool: set fsid to 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:38 np0005548915 admiring_rhodes[73641]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  6 04:37:38 np0005548915 systemd[1]: libpod-4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e.scope: Deactivated successfully.
Dec  6 04:37:38 np0005548915 podman[73625]: 2025-12-06 09:37:38.668682734 +0000 UTC m=+0.187652206 container died 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:37:38 np0005548915 podman[73625]: 2025-12-06 09:37:38.716821267 +0000 UTC m=+0.235790689 container remove 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:38 np0005548915 systemd[1]: libpod-conmon-4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e.scope: Deactivated successfully.
Dec  6 04:37:38 np0005548915 podman[73662]: 2025-12-06 09:37:38.802352758 +0000 UTC m=+0.057191216 container create 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:38 np0005548915 systemd[1]: Started libpod-conmon-50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4.scope.
Dec  6 04:37:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:38 np0005548915 podman[73662]: 2025-12-06 09:37:38.781252666 +0000 UTC m=+0.036091154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:38 np0005548915 podman[73662]: 2025-12-06 09:37:38.952021131 +0000 UTC m=+0.206859609 container init 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:38 np0005548915 podman[73662]: 2025-12-06 09:37:38.960917158 +0000 UTC m=+0.215755626 container start 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:37:38 np0005548915 podman[73662]: 2025-12-06 09:37:38.965296085 +0000 UTC m=+0.220134553 container attach 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:37:39 np0005548915 systemd[1]: libpod-50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4.scope: Deactivated successfully.
Dec  6 04:37:39 np0005548915 podman[73662]: 2025-12-06 09:37:39.129823252 +0000 UTC m=+0.384661720 container died 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:39 np0005548915 podman[73662]: 2025-12-06 09:37:39.181149952 +0000 UTC m=+0.435988410 container remove 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  6 04:37:39 np0005548915 systemd[1]: libpod-conmon-50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4.scope: Deactivated successfully.
Dec  6 04:37:39 np0005548915 systemd[1]: Reloading.
Dec  6 04:37:39 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:37:39 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:37:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-dbcf53142a05666f10abd497e82241d956ccc22802545d416deaf2058a4028d4-merged.mount: Deactivated successfully.
Dec  6 04:37:39 np0005548915 systemd[1]: Reloading.
Dec  6 04:37:39 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:37:39 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:37:39 np0005548915 systemd[1]: Reached target All Ceph clusters and services.
Dec  6 04:37:39 np0005548915 systemd[1]: Reloading.
Dec  6 04:37:39 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:37:39 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:37:40 np0005548915 systemd[1]: Reached target Ceph cluster 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:37:40 np0005548915 systemd[1]: Reloading.
Dec  6 04:37:40 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:37:40 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:37:40 np0005548915 systemd[1]: Reloading.
Dec  6 04:37:40 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:37:40 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:37:40 np0005548915 systemd[1]: Created slice Slice /system/ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:37:40 np0005548915 systemd[1]: Reached target System Time Set.
Dec  6 04:37:40 np0005548915 systemd[1]: Reached target System Time Synchronized.
Dec  6 04:37:40 np0005548915 systemd[1]: Starting Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:37:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:41 np0005548915 podman[73958]: 2025-12-06 09:37:41.047966283 +0000 UTC m=+0.064155732 container create 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 04:37:41 np0005548915 podman[73958]: 2025-12-06 09:37:41.02010556 +0000 UTC m=+0.036295039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 podman[73958]: 2025-12-06 09:37:41.173046929 +0000 UTC m=+0.189236398 container init 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:37:41 np0005548915 podman[73958]: 2025-12-06 09:37:41.179978193 +0000 UTC m=+0.196167642 container start 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:37:41 np0005548915 bash[73958]: 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992
Dec  6 04:37:41 np0005548915 systemd[1]: Started Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: pidfile_write: ignore empty --pid-file
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: load: jerasure load: lrc 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: RocksDB version: 7.9.2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Git sha 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: DB SUMMARY
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: DB Session ID:  0ZQHI2PX756UQLPOWVHK
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: CURRENT file:  CURRENT
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: IDENTITY file:  IDENTITY
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                         Options.error_if_exists: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                       Options.create_if_missing: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                         Options.paranoid_checks: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                                     Options.env: 0x55dcf3a48c20
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                                Options.info_log: 0x55dcf5242d60
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.max_file_opening_threads: 16
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                              Options.statistics: (nil)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                               Options.use_fsync: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                       Options.max_log_file_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                         Options.allow_fallocate: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                        Options.use_direct_reads: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:          Options.create_missing_column_families: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                              Options.db_log_dir: 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                                 Options.wal_dir: 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.advise_random_on_open: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                    Options.write_buffer_manager: 0x55dcf5247900
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                            Options.rate_limiter: (nil)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.unordered_write: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                               Options.row_cache: None
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                              Options.wal_filter: None
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.allow_ingest_behind: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.two_write_queues: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.manual_wal_flush: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.wal_compression: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.atomic_flush: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.log_readahead_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.allow_data_in_errors: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.db_host_id: __hostname__
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.max_background_jobs: 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.max_background_compactions: -1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.max_subcompactions: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.max_total_wal_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                          Options.max_open_files: -1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                          Options.bytes_per_sync: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:       Options.compaction_readahead_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.max_background_flushes: -1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Compression algorithms supported:
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kZSTD supported: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kXpressCompression supported: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kBZip2Compression supported: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kLZ4Compression supported: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kZlibCompression supported: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: 	kSnappyCompression supported: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:           Options.merge_operator: 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:        Options.compaction_filter: None
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dcf5242500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dcf5267350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:        Options.write_buffer_size: 33554432
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:  Options.max_write_buffer_number: 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:          Options.compression: NoCompression
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.num_levels: 7
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 423e8366-3852-4d2b-aa53-87abab31aff3
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013861239324, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013861241646, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "0ZQHI2PX756UQLPOWVHK", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013861241797, "job": 1, "event": "recovery_finished"}
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55dcf5268e00
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: DB pointer 0x55dcf5372000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55dcf5267350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@-1(???) e0 preinit fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:37:38.663870+0000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).mds e1 new map
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-12-06T09:37:41:285728+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : fsmap 
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mkfs 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:41 np0005548915 podman[73978]: 2025-12-06 09:37:41.308004638 +0000 UTC m=+0.069797323 container create afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  6 04:37:41 np0005548915 systemd[1]: Started libpod-conmon-afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b.scope.
Dec  6 04:37:41 np0005548915 podman[73978]: 2025-12-06 09:37:41.283903505 +0000 UTC m=+0.045696210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:41 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d489809cc4b9cef907666b23badca89098a2534987fb9d062d0ccca71d096c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d489809cc4b9cef907666b23badca89098a2534987fb9d062d0ccca71d096c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d489809cc4b9cef907666b23badca89098a2534987fb9d062d0ccca71d096c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 podman[73978]: 2025-12-06 09:37:41.430949377 +0000 UTC m=+0.192742112 container init afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:41 np0005548915 podman[73978]: 2025-12-06 09:37:41.44342531 +0000 UTC m=+0.205217965 container start afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:41 np0005548915 podman[73978]: 2025-12-06 09:37:41.447044266 +0000 UTC m=+0.208837021 container attach afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  6 04:37:41 np0005548915 ceph-mon[73977]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501861568' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:  cluster:
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    id:     5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    health: HEALTH_OK
Dec  6 04:37:41 np0005548915 strange_joliot[74032]: 
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:  services:
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    mon: 1 daemons, quorum compute-0 (age 0.403165s)
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    mgr: no daemons active
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    osd: 0 osds: 0 up, 0 in
Dec  6 04:37:41 np0005548915 strange_joliot[74032]: 
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:  data:
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    pools:   0 pools, 0 pgs
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    objects: 0 objects, 0 B
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    usage:   0 B used, 0 B / 0 B avail
Dec  6 04:37:41 np0005548915 strange_joliot[74032]:    pgs:     
Dec  6 04:37:41 np0005548915 strange_joliot[74032]: 
Dec  6 04:37:41 np0005548915 systemd[1]: libpod-afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b.scope: Deactivated successfully.
Dec  6 04:37:41 np0005548915 podman[73978]: 2025-12-06 09:37:41.699931882 +0000 UTC m=+0.461724557 container died afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:37:41 np0005548915 podman[73978]: 2025-12-06 09:37:41.744165361 +0000 UTC m=+0.505958056 container remove afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:41 np0005548915 systemd[1]: libpod-conmon-afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b.scope: Deactivated successfully.
Dec  6 04:37:41 np0005548915 podman[74070]: 2025-12-06 09:37:41.845537665 +0000 UTC m=+0.066085574 container create 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:41 np0005548915 systemd[1]: Started libpod-conmon-9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd.scope.
Dec  6 04:37:41 np0005548915 podman[74070]: 2025-12-06 09:37:41.8175882 +0000 UTC m=+0.038136139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:41 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:41 np0005548915 podman[74070]: 2025-12-06 09:37:41.938358221 +0000 UTC m=+0.158906200 container init 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 04:37:41 np0005548915 podman[74070]: 2025-12-06 09:37:41.94356371 +0000 UTC m=+0.164111639 container start 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:37:41 np0005548915 podman[74070]: 2025-12-06 09:37:41.94773411 +0000 UTC m=+0.168282099 container attach 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  6 04:37:42 np0005548915 crazy_tu[74088]: 
Dec  6 04:37:42 np0005548915 crazy_tu[74088]: [global]
Dec  6 04:37:42 np0005548915 crazy_tu[74088]: 	fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:42 np0005548915 crazy_tu[74088]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  6 04:37:42 np0005548915 systemd[1]: libpod-9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd.scope: Deactivated successfully.
Dec  6 04:37:42 np0005548915 podman[74070]: 2025-12-06 09:37:42.140152093 +0000 UTC m=+0.360699982 container died 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 04:37:42 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da-merged.mount: Deactivated successfully.
Dec  6 04:37:42 np0005548915 podman[74070]: 2025-12-06 09:37:42.177809377 +0000 UTC m=+0.398357266 container remove 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec  6 04:37:42 np0005548915 systemd[1]: libpod-conmon-9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd.scope: Deactivated successfully.
Dec  6 04:37:42 np0005548915 podman[74125]: 2025-12-06 09:37:42.260046981 +0000 UTC m=+0.050224641 container create ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:42 np0005548915 systemd[1]: Started libpod-conmon-ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a.scope.
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  6 04:37:42 np0005548915 podman[74125]: 2025-12-06 09:37:42.235505076 +0000 UTC m=+0.025682736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:42 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:42 np0005548915 podman[74125]: 2025-12-06 09:37:42.36873395 +0000 UTC m=+0.158911660 container init ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 04:37:42 np0005548915 podman[74125]: 2025-12-06 09:37:42.375458039 +0000 UTC m=+0.165635679 container start ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:37:42 np0005548915 podman[74125]: 2025-12-06 09:37:42.378847759 +0000 UTC m=+0.169025449 container attach ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429756443' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:37:42 np0005548915 systemd[1]: libpod-ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a.scope: Deactivated successfully.
Dec  6 04:37:42 np0005548915 podman[74125]: 2025-12-06 09:37:42.611004161 +0000 UTC m=+0.401181861 container died ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:42 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3-merged.mount: Deactivated successfully.
Dec  6 04:37:42 np0005548915 podman[74125]: 2025-12-06 09:37:42.675904652 +0000 UTC m=+0.466082292 container remove ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  6 04:37:42 np0005548915 systemd[1]: libpod-conmon-ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a.scope: Deactivated successfully.
Dec  6 04:37:42 np0005548915 systemd[1]: Stopping Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: mon.compute-0@0(leader) e1 shutdown
Dec  6 04:37:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0[73973]: 2025-12-06T09:37:42.970+0000 7fa609da7640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  6 04:37:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0[73973]: 2025-12-06T09:37:42.970+0000 7fa609da7640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  6 04:37:42 np0005548915 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  6 04:37:43 np0005548915 podman[74209]: 2025-12-06 09:37:43.063001907 +0000 UTC m=+0.136818391 container died 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:37:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e-merged.mount: Deactivated successfully.
Dec  6 04:37:43 np0005548915 podman[74209]: 2025-12-06 09:37:43.108284004 +0000 UTC m=+0.182100478 container remove 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:37:43 np0005548915 bash[74209]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0
Dec  6 04:37:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 04:37:43 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service: Deactivated successfully.
Dec  6 04:37:43 np0005548915 systemd[1]: Stopped Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:37:43 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service: Consumed 1.237s CPU time.
Dec  6 04:37:43 np0005548915 systemd[1]: Starting Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:37:43 np0005548915 podman[74308]: 2025-12-06 09:37:43.60225621 +0000 UTC m=+0.063131175 container create 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:43 np0005548915 podman[74308]: 2025-12-06 09:37:43.57637481 +0000 UTC m=+0.037249845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:43 np0005548915 podman[74308]: 2025-12-06 09:37:43.682830499 +0000 UTC m=+0.143705534 container init 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:43 np0005548915 podman[74308]: 2025-12-06 09:37:43.692008034 +0000 UTC m=+0.152883019 container start 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  6 04:37:43 np0005548915 bash[74308]: 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d
Dec  6 04:37:43 np0005548915 systemd[1]: Started Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: pidfile_write: ignore empty --pid-file
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: load: jerasure load: lrc 
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: RocksDB version: 7.9.2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Git sha 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: DB SUMMARY
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: DB Session ID:  4WBX5WA2U4DRQ0QUUFCR
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: CURRENT file:  CURRENT
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: IDENTITY file:  IDENTITY
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58735 ; 
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                         Options.error_if_exists: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                       Options.create_if_missing: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                         Options.paranoid_checks: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                                     Options.env: 0x55fd97e60c20
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                                Options.info_log: 0x55fd9a54dac0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.max_file_opening_threads: 16
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                              Options.statistics: (nil)
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                               Options.use_fsync: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                       Options.max_log_file_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                         Options.allow_fallocate: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                        Options.use_direct_reads: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:          Options.create_missing_column_families: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                              Options.db_log_dir: 
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                                 Options.wal_dir: 
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.advise_random_on_open: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                    Options.write_buffer_manager: 0x55fd9a551900
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                            Options.rate_limiter: (nil)
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.unordered_write: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                               Options.row_cache: None
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                              Options.wal_filter: None
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.allow_ingest_behind: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.two_write_queues: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.manual_wal_flush: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.wal_compression: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.atomic_flush: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.log_readahead_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.allow_data_in_errors: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.db_host_id: __hostname__
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.max_background_jobs: 2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.max_background_compactions: -1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.max_subcompactions: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.max_total_wal_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                          Options.max_open_files: -1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                          Options.bytes_per_sync: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:       Options.compaction_readahead_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.max_background_flushes: -1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Compression algorithms supported:
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kZSTD supported: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kXpressCompression supported: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kBZip2Compression supported: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kLZ4Compression supported: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kZlibCompression supported: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: #011kSnappyCompression supported: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:           Options.merge_operator: 
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:        Options.compaction_filter: None
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fd9a54caa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fd9a571350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:        Options.write_buffer_size: 33554432
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:  Options.max_write_buffer_number: 2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:          Options.compression: NoCompression
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.num_levels: 7
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 423e8366-3852-4d2b-aa53-87abab31aff3
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013863753736, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013863761298, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56960, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54477, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013863, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013863761472, "job": 1, "event": "recovery_finished"}
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fd9a572e00
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: DB pointer 0x55fd9a67c000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   59.01 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0   59.01 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 1.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 1.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???) e1 preinit fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???).mds e1 new map
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-12-06T09:37:41:285728+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  6 04:37:43 np0005548915 podman[74328]: 2025-12-06 09:37:43.799263924 +0000 UTC m=+0.060418112 container create b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:37:38.663870+0000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap 
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  6 04:37:43 np0005548915 systemd[1]: Started libpod-conmon-b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c.scope.
Dec  6 04:37:43 np0005548915 podman[74328]: 2025-12-06 09:37:43.775662465 +0000 UTC m=+0.036816753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:43 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:43 np0005548915 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  6 04:37:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:43 np0005548915 podman[74328]: 2025-12-06 09:37:43.90856622 +0000 UTC m=+0.169720418 container init b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:43 np0005548915 podman[74328]: 2025-12-06 09:37:43.918572217 +0000 UTC m=+0.179726405 container start b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 04:37:43 np0005548915 podman[74328]: 2025-12-06 09:37:43.927053113 +0000 UTC m=+0.188207301 container attach b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:37:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec  6 04:37:44 np0005548915 systemd[1]: libpod-b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c.scope: Deactivated successfully.
Dec  6 04:37:44 np0005548915 podman[74328]: 2025-12-06 09:37:44.191429934 +0000 UTC m=+0.452584132 container died b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 04:37:44 np0005548915 systemd[1]: var-lib-containers-storage-overlay-feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc-merged.mount: Deactivated successfully.
Dec  6 04:37:44 np0005548915 podman[74328]: 2025-12-06 09:37:44.255651237 +0000 UTC m=+0.516805455 container remove b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:37:44 np0005548915 systemd[1]: libpod-conmon-b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c.scope: Deactivated successfully.
Dec  6 04:37:44 np0005548915 podman[74420]: 2025-12-06 09:37:44.328614083 +0000 UTC m=+0.051323329 container create 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 04:37:44 np0005548915 systemd[1]: Started libpod-conmon-6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699.scope.
Dec  6 04:37:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:44 np0005548915 podman[74420]: 2025-12-06 09:37:44.301897511 +0000 UTC m=+0.024606757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:44 np0005548915 podman[74420]: 2025-12-06 09:37:44.416802306 +0000 UTC m=+0.139511602 container init 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:44 np0005548915 podman[74420]: 2025-12-06 09:37:44.427307345 +0000 UTC m=+0.150016591 container start 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:37:44 np0005548915 podman[74420]: 2025-12-06 09:37:44.431684402 +0000 UTC m=+0.154393658 container attach 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:37:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec  6 04:37:44 np0005548915 systemd[1]: libpod-6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699.scope: Deactivated successfully.
Dec  6 04:37:44 np0005548915 podman[74420]: 2025-12-06 09:37:44.712589744 +0000 UTC m=+0.435298980 container died 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:37:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa-merged.mount: Deactivated successfully.
Dec  6 04:37:45 np0005548915 podman[74420]: 2025-12-06 09:37:45.17336025 +0000 UTC m=+0.896069496 container remove 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 04:37:45 np0005548915 systemd[1]: libpod-conmon-6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699.scope: Deactivated successfully.
Dec  6 04:37:45 np0005548915 systemd[1]: Reloading.
Dec  6 04:37:45 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:37:45 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:37:45 np0005548915 systemd[1]: Reloading.
Dec  6 04:37:45 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:37:45 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:37:45 np0005548915 systemd[1]: Starting Ceph mgr.compute-0.qhdjwa for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:37:46 np0005548915 podman[74599]: 2025-12-06 09:37:46.221675265 +0000 UTC m=+0.083997003 container create 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:46 np0005548915 podman[74599]: 2025-12-06 09:37:46.186895464 +0000 UTC m=+0.049217252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/var/lib/ceph/mgr/ceph-compute-0.qhdjwa supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:46 np0005548915 podman[74599]: 2025-12-06 09:37:46.309044803 +0000 UTC m=+0.171366561 container init 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:46 np0005548915 podman[74599]: 2025-12-06 09:37:46.318338204 +0000 UTC m=+0.180659932 container start 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec  6 04:37:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:46.493+0000 7ff0866c5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:37:46 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec  6 04:37:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:46.576+0000 7ff0866c5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:37:46 np0005548915 bash[74599]: 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9
Dec  6 04:37:46 np0005548915 systemd[1]: Started Ceph mgr.compute-0.qhdjwa for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:37:46 np0005548915 podman[74639]: 2025-12-06 09:37:46.829159541 +0000 UTC m=+0.071440218 container create b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:37:46 np0005548915 systemd[1]: Started libpod-conmon-b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0.scope.
Dec  6 04:37:46 np0005548915 podman[74639]: 2025-12-06 09:37:46.797715937 +0000 UTC m=+0.039996674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:46 np0005548915 podman[74639]: 2025-12-06 09:37:46.948117337 +0000 UTC m=+0.190398024 container init b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 04:37:46 np0005548915 podman[74639]: 2025-12-06 09:37:46.960233864 +0000 UTC m=+0.202514511 container start b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:37:46 np0005548915 podman[74639]: 2025-12-06 09:37:46.964098009 +0000 UTC m=+0.206378686 container attach b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  6 04:37:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723740387' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]: 
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]: {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "health": {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "status": "HEALTH_OK",
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "checks": {},
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "mutes": []
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    },
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "election_epoch": 5,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "quorum": [
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        0
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    ],
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "quorum_names": [
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "compute-0"
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    ],
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "quorum_age": 3,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "monmap": {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "epoch": 1,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "min_mon_release_name": "squid",
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_mons": 1
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    },
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "osdmap": {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "epoch": 1,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_osds": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_up_osds": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "osd_up_since": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_in_osds": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "osd_in_since": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_remapped_pgs": 0
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    },
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "pgmap": {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "pgs_by_state": [],
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_pgs": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_pools": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_objects": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "data_bytes": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "bytes_used": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "bytes_avail": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "bytes_total": 0
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    },
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "fsmap": {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "epoch": 1,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "btime": "2025-12-06T09:37:41.285728+0000",
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "by_rank": [],
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "up:standby": 0
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    },
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "mgrmap": {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "available": false,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "num_standbys": 0,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "modules": [
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:            "iostat",
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:            "nfs",
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:            "restful"
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        ],
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "services": {}
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    },
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "servicemap": {
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "epoch": 1,
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "modified": "2025-12-06T09:37:41.289249+0000",
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:        "services": {}
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    },
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]:    "progress_events": {}
Dec  6 04:37:47 np0005548915 pensive_matsumoto[74653]: }
Dec  6 04:37:47 np0005548915 systemd[1]: libpod-b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0.scope: Deactivated successfully.
Dec  6 04:37:47 np0005548915 podman[74690]: 2025-12-06 09:37:47.229899006 +0000 UTC m=+0.024712364 container died b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:37:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d-merged.mount: Deactivated successfully.
Dec  6 04:37:47 np0005548915 podman[74690]: 2025-12-06 09:37:47.26544217 +0000 UTC m=+0.060255508 container remove b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:47 np0005548915 systemd[1]: libpod-conmon-b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0.scope: Deactivated successfully.
Dec  6 04:37:47 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec  6 04:37:47 np0005548915 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:37:47 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec  6 04:37:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:47.433+0000 7ff0866c5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:37:47 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec  6 04:37:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.060+0000 7ff0866c5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  6 04:37:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  6 04:37:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  from numpy import show_config as show_numpy_config
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.248+0000 7ff0866c5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec  6 04:37:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.326+0000 7ff0866c5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec  6 04:37:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.467+0000 7ff0866c5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec  6 04:37:48 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec  6 04:37:49 np0005548915 podman[74705]: 2025-12-06 09:37:49.366766733 +0000 UTC m=+0.066914530 container create 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:37:49 np0005548915 systemd[1]: Started libpod-conmon-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope.
Dec  6 04:37:49 np0005548915 podman[74705]: 2025-12-06 09:37:49.330542104 +0000 UTC m=+0.030690011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:49 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec  6 04:37:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.449+0000 7ff0866c5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 podman[74705]: 2025-12-06 09:37:49.47253188 +0000 UTC m=+0.172679777 container init 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:37:49 np0005548915 podman[74705]: 2025-12-06 09:37:49.479120899 +0000 UTC m=+0.179268736 container start 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:49 np0005548915 podman[74705]: 2025-12-06 09:37:49.483869701 +0000 UTC m=+0.184017598 container attach 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec  6 04:37:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.671+0000 7ff0866c5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  6 04:37:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/75919033' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  6 04:37:49 np0005548915 silly_newton[74721]: 
Dec  6 04:37:49 np0005548915 silly_newton[74721]: {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "health": {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "status": "HEALTH_OK",
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "checks": {},
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "mutes": []
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    },
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "election_epoch": 5,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "quorum": [
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        0
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    ],
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "quorum_names": [
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "compute-0"
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    ],
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "quorum_age": 5,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "monmap": {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "epoch": 1,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "min_mon_release_name": "squid",
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_mons": 1
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    },
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "osdmap": {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "epoch": 1,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_osds": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_up_osds": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "osd_up_since": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_in_osds": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "osd_in_since": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_remapped_pgs": 0
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    },
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "pgmap": {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "pgs_by_state": [],
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_pgs": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_pools": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_objects": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "data_bytes": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "bytes_used": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "bytes_avail": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "bytes_total": 0
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    },
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "fsmap": {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "epoch": 1,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "btime": "2025-12-06T09:37:41.285728+0000",
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "by_rank": [],
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "up:standby": 0
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    },
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "mgrmap": {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "available": false,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "num_standbys": 0,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "modules": [
Dec  6 04:37:49 np0005548915 silly_newton[74721]:            "iostat",
Dec  6 04:37:49 np0005548915 silly_newton[74721]:            "nfs",
Dec  6 04:37:49 np0005548915 silly_newton[74721]:            "restful"
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        ],
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "services": {}
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    },
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "servicemap": {
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "epoch": 1,
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "modified": "2025-12-06T09:37:41.289249+0000",
Dec  6 04:37:49 np0005548915 silly_newton[74721]:        "services": {}
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    },
Dec  6 04:37:49 np0005548915 silly_newton[74721]:    "progress_events": {}
Dec  6 04:37:49 np0005548915 silly_newton[74721]: }
Dec  6 04:37:49 np0005548915 systemd[1]: libpod-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope: Deactivated successfully.
Dec  6 04:37:49 np0005548915 conmon[74721]: conmon 805b29a0a6981d54527d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope/container/memory.events
Dec  6 04:37:49 np0005548915 podman[74705]: 2025-12-06 09:37:49.711588474 +0000 UTC m=+0.411736341 container died 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec  6 04:37:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.747+0000 7ff0866c5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e-merged.mount: Deactivated successfully.
Dec  6 04:37:49 np0005548915 podman[74705]: 2025-12-06 09:37:49.799447631 +0000 UTC m=+0.499595468 container remove 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 04:37:49 np0005548915 systemd[1]: libpod-conmon-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope: Deactivated successfully.
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec  6 04:37:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.818+0000 7ff0866c5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec  6 04:37:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.900+0000 7ff0866c5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.968+0000 7ff0866c5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:37:49 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec  6 04:37:50 np0005548915 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:37:50 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec  6 04:37:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:50.302+0000 7ff0866c5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:37:50 np0005548915 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:37:50 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec  6 04:37:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:50.391+0000 7ff0866c5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:37:50 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec  6 04:37:50 np0005548915 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:37:50 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec  6 04:37:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:50.826+0000 7ff0866c5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec  6 04:37:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.376+0000 7ff0866c5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec  6 04:37:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.446+0000 7ff0866c5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec  6 04:37:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.527+0000 7ff0866c5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec  6 04:37:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.669+0000 7ff0866c5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec  6 04:37:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.732+0000 7ff0866c5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 podman[74761]: 2025-12-06 09:37:51.877914056 +0000 UTC m=+0.049044090 container create e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec  6 04:37:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.891+0000 7ff0866c5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:37:51 np0005548915 systemd[1]: Started libpod-conmon-e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd.scope.
Dec  6 04:37:51 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:51 np0005548915 podman[74761]: 2025-12-06 09:37:51.857740192 +0000 UTC m=+0.028870246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:51 np0005548915 podman[74761]: 2025-12-06 09:37:51.959030932 +0000 UTC m=+0.130160996 container init e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 04:37:51 np0005548915 podman[74761]: 2025-12-06 09:37:51.971874953 +0000 UTC m=+0.143005027 container start e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 04:37:51 np0005548915 podman[74761]: 2025-12-06 09:37:51.976654346 +0000 UTC m=+0.147784410 container attach e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec  6 04:37:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:52.105+0000 7ff0866c5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156880945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]: 
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]: {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "health": {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "status": "HEALTH_OK",
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "checks": {},
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "mutes": []
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    },
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "election_epoch": 5,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "quorum": [
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        0
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    ],
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "quorum_names": [
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "compute-0"
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    ],
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "quorum_age": 8,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "monmap": {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "epoch": 1,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "min_mon_release_name": "squid",
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_mons": 1
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    },
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "osdmap": {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "epoch": 1,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_osds": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_up_osds": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "osd_up_since": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_in_osds": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "osd_in_since": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_remapped_pgs": 0
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    },
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "pgmap": {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "pgs_by_state": [],
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_pgs": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_pools": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_objects": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "data_bytes": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "bytes_used": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "bytes_avail": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "bytes_total": 0
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    },
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "fsmap": {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "epoch": 1,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "btime": "2025-12-06T09:37:41:285728+0000",
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "by_rank": [],
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "up:standby": 0
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    },
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "mgrmap": {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "available": false,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "num_standbys": 0,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "modules": [
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:            "iostat",
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:            "nfs",
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:            "restful"
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        ],
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "services": {}
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    },
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "servicemap": {
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "epoch": 1,
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "modified": "2025-12-06T09:37:41.289249+0000",
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:        "services": {}
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    },
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]:    "progress_events": {}
Dec  6 04:37:52 np0005548915 adoring_burnell[74777]: }
Dec  6 04:37:52 np0005548915 systemd[1]: libpod-e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd.scope: Deactivated successfully.
Dec  6 04:37:52 np0005548915 podman[74761]: 2025-12-06 09:37:52.244408821 +0000 UTC m=+0.415538875 container died e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 04:37:52 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16-merged.mount: Deactivated successfully.
Dec  6 04:37:52 np0005548915 podman[74761]: 2025-12-06 09:37:52.287216198 +0000 UTC m=+0.458346242 container remove e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:52 np0005548915 systemd[1]: libpod-conmon-e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd.scope: Deactivated successfully.
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec  6 04:37:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:52.362+0000 7ff0866c5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:37:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:52.427+0000 7ff0866c5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x559af7a969c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.qhdjwa(active, starting, since 0.0100015s)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [balancer INFO root] Starting
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:37:52
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [balancer INFO root] No pools available
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [progress INFO root] Loading...
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [progress INFO root] No stored events to load
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded [] historic events
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec  6 04:37:52 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec  6 04:37:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:37:53 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.qhdjwa(active, since 1.03065s)
Dec  6 04:37:54 np0005548915 podman[74896]: 2025-12-06 09:37:54.39464137 +0000 UTC m=+0.071691502 container create 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:54 np0005548915 systemd[1]: Started libpod-conmon-966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3.scope.
Dec  6 04:37:54 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:37:54 np0005548915 podman[74896]: 2025-12-06 09:37:54.367044631 +0000 UTC m=+0.044094883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:54 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:54 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.qhdjwa(active, since 2s)
Dec  6 04:37:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:54 np0005548915 podman[74896]: 2025-12-06 09:37:54.48876022 +0000 UTC m=+0.165810352 container init 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:54 np0005548915 podman[74896]: 2025-12-06 09:37:54.495167815 +0000 UTC m=+0.172217947 container start 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:54 np0005548915 podman[74896]: 2025-12-06 09:37:54.49899572 +0000 UTC m=+0.176045852 container attach 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  6 04:37:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1286987495' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]: 
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]: {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "health": {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "status": "HEALTH_OK",
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "checks": {},
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "mutes": []
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    },
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "election_epoch": 5,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "quorum": [
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        0
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    ],
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "quorum_names": [
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "compute-0"
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    ],
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "quorum_age": 11,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "monmap": {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "epoch": 1,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "min_mon_release_name": "squid",
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_mons": 1
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    },
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "osdmap": {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "epoch": 1,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_osds": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_up_osds": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "osd_up_since": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_in_osds": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "osd_in_since": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_remapped_pgs": 0
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    },
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "pgmap": {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "pgs_by_state": [],
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_pgs": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_pools": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_objects": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "data_bytes": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "bytes_used": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "bytes_avail": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "bytes_total": 0
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    },
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "fsmap": {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "epoch": 1,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "btime": "2025-12-06T09:37:41:285728+0000",
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "by_rank": [],
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "up:standby": 0
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    },
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "mgrmap": {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "available": true,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "num_standbys": 0,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "modules": [
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:            "iostat",
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:            "nfs",
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:            "restful"
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        ],
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "services": {}
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    },
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "servicemap": {
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "epoch": 1,
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "modified": "2025-12-06T09:37:41.289249+0000",
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:        "services": {}
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    },
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]:    "progress_events": {}
Dec  6 04:37:54 np0005548915 stupefied_taussig[74912]: }
Dec  6 04:37:54 np0005548915 systemd[1]: libpod-966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3.scope: Deactivated successfully.
Dec  6 04:37:54 np0005548915 podman[74896]: 2025-12-06 09:37:54.933321041 +0000 UTC m=+0.610371193 container died 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:37:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641-merged.mount: Deactivated successfully.
Dec  6 04:37:54 np0005548915 podman[74896]: 2025-12-06 09:37:54.981271778 +0000 UTC m=+0.658321900 container remove 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:54 np0005548915 systemd[1]: libpod-conmon-966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3.scope: Deactivated successfully.
Dec  6 04:37:55 np0005548915 podman[74951]: 2025-12-06 09:37:55.067703628 +0000 UTC m=+0.056810192 container create e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 04:37:55 np0005548915 systemd[1]: Started libpod-conmon-e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32.scope.
Dec  6 04:37:55 np0005548915 podman[74951]: 2025-12-06 09:37:55.039999796 +0000 UTC m=+0.029106400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:55 np0005548915 podman[74951]: 2025-12-06 09:37:55.164312076 +0000 UTC m=+0.153418650 container init e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  6 04:37:55 np0005548915 podman[74951]: 2025-12-06 09:37:55.17164588 +0000 UTC m=+0.160752414 container start e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:55 np0005548915 podman[74951]: 2025-12-06 09:37:55.176572996 +0000 UTC m=+0.165679530 container attach e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  6 04:37:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  6 04:37:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/684219841' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  6 04:37:55 np0005548915 pensive_newton[74968]: 
Dec  6 04:37:55 np0005548915 pensive_newton[74968]: [global]
Dec  6 04:37:55 np0005548915 pensive_newton[74968]: 	fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:37:55 np0005548915 pensive_newton[74968]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  6 04:37:55 np0005548915 systemd[1]: libpod-e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32.scope: Deactivated successfully.
Dec  6 04:37:55 np0005548915 podman[74951]: 2025-12-06 09:37:55.574165448 +0000 UTC m=+0.563272022 container died e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 04:37:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff-merged.mount: Deactivated successfully.
Dec  6 04:37:55 np0005548915 podman[74951]: 2025-12-06 09:37:55.621705548 +0000 UTC m=+0.610812102 container remove e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:55 np0005548915 systemd[1]: libpod-conmon-e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32.scope: Deactivated successfully.
Dec  6 04:37:55 np0005548915 podman[75006]: 2025-12-06 09:37:55.681166591 +0000 UTC m=+0.038276720 container create 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:55 np0005548915 systemd[1]: Started libpod-conmon-6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c.scope.
Dec  6 04:37:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:55 np0005548915 podman[75006]: 2025-12-06 09:37:55.662167289 +0000 UTC m=+0.019277458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:55 np0005548915 podman[75006]: 2025-12-06 09:37:55.777668597 +0000 UTC m=+0.134778766 container init 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  6 04:37:55 np0005548915 podman[75006]: 2025-12-06 09:37:55.786585561 +0000 UTC m=+0.143695720 container start 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:55 np0005548915 podman[75006]: 2025-12-06 09:37:55.79057249 +0000 UTC m=+0.147682659 container attach 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:37:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec  6 04:37:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:37:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr respawn  exe_path /proc/self/exe
Dec  6 04:37:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.qhdjwa(active, since 4s)
Dec  6 04:37:56 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/684219841' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  6 04:37:56 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  6 04:37:56 np0005548915 systemd[1]: libpod-6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c.scope: Deactivated successfully.
Dec  6 04:37:56 np0005548915 podman[75006]: 2025-12-06 09:37:56.513401001 +0000 UTC m=+0.870511160 container died 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:37:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a-merged.mount: Deactivated successfully.
Dec  6 04:37:56 np0005548915 podman[75006]: 2025-12-06 09:37:56.564209804 +0000 UTC m=+0.921319953 container remove 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:37:56 np0005548915 systemd[1]: libpod-conmon-6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c.scope: Deactivated successfully.
Dec  6 04:37:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec  6 04:37:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec  6 04:37:56 np0005548915 podman[75059]: 2025-12-06 09:37:56.632302885 +0000 UTC m=+0.046426238 container create ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec  6 04:37:56 np0005548915 systemd[1]: Started libpod-conmon-ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee.scope.
Dec  6 04:37:56 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:56 np0005548915 podman[75059]: 2025-12-06 09:37:56.611421117 +0000 UTC m=+0.025544480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:56 np0005548915 podman[75059]: 2025-12-06 09:37:56.707095888 +0000 UTC m=+0.121219311 container init ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:56 np0005548915 podman[75059]: 2025-12-06 09:37:56.716153235 +0000 UTC m=+0.130276588 container start ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:37:56 np0005548915 podman[75059]: 2025-12-06 09:37:56.721455058 +0000 UTC m=+0.135578471 container attach ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec  6 04:37:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:56.741+0000 7f8db8775140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:37:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:56.814+0000 7f8db8775140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:37:56 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec  6 04:37:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  6 04:37:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148877063' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  6 04:37:57 np0005548915 dazzling_ride[75095]: {
Dec  6 04:37:57 np0005548915 dazzling_ride[75095]:    "epoch": 5,
Dec  6 04:37:57 np0005548915 dazzling_ride[75095]:    "available": true,
Dec  6 04:37:57 np0005548915 dazzling_ride[75095]:    "active_name": "compute-0.qhdjwa",
Dec  6 04:37:57 np0005548915 dazzling_ride[75095]:    "num_standby": 0
Dec  6 04:37:57 np0005548915 dazzling_ride[75095]: }
Dec  6 04:37:57 np0005548915 systemd[1]: libpod-ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee.scope: Deactivated successfully.
Dec  6 04:37:57 np0005548915 podman[75059]: 2025-12-06 09:37:57.165165683 +0000 UTC m=+0.579289036 container died ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c-merged.mount: Deactivated successfully.
Dec  6 04:37:57 np0005548915 podman[75059]: 2025-12-06 09:37:57.214768223 +0000 UTC m=+0.628891576 container remove ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:57 np0005548915 systemd[1]: libpod-conmon-ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee.scope: Deactivated successfully.
Dec  6 04:37:57 np0005548915 podman[75145]: 2025-12-06 09:37:57.277955568 +0000 UTC m=+0.041833599 container create f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:37:57 np0005548915 systemd[1]: Started libpod-conmon-f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db.scope.
Dec  6 04:37:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:37:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:37:57 np0005548915 podman[75145]: 2025-12-06 09:37:57.256925117 +0000 UTC m=+0.020803198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:37:57 np0005548915 podman[75145]: 2025-12-06 09:37:57.369817464 +0000 UTC m=+0.133695535 container init f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:37:57 np0005548915 podman[75145]: 2025-12-06 09:37:57.379466003 +0000 UTC m=+0.143344064 container start f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:37:57 np0005548915 podman[75145]: 2025-12-06 09:37:57.384204436 +0000 UTC m=+0.148082547 container attach f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 04:37:57 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  6 04:37:57 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec  6 04:37:57 np0005548915 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:37:57 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec  6 04:37:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:57.622+0000 7f8db8775140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec  6 04:37:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.242+0000 7f8db8775140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  6 04:37:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  6 04:37:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  from numpy import show_config as show_numpy_config
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec  6 04:37:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.391+0000 7f8db8775140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec  6 04:37:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.456+0000 7f8db8775140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec  6 04:37:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.585+0000 7f8db8775140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:37:58 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec  6 04:37:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.526+0000 7f8db8775140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec  6 04:37:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.727+0000 7f8db8775140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec  6 04:37:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.808+0000 7f8db8775140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec  6 04:37:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.869+0000 7f8db8775140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:37:59 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec  6 04:37:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.941+0000 7f8db8775140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec  6 04:38:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.005+0000 7f8db8775140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec  6 04:38:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.353+0000 7f8db8775140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec  6 04:38:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.446+0000 7f8db8775140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:38:00 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec  6 04:38:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.864+0000 7f8db8775140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec  6 04:38:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.429+0000 7f8db8775140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec  6 04:38:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.502+0000 7f8db8775140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec  6 04:38:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.588+0000 7f8db8775140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec  6 04:38:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.747+0000 7f8db8775140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec  6 04:38:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.823+0000 7f8db8775140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:38:01 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec  6 04:38:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.989+0000 7f8db8775140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec  6 04:38:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:02.221+0000 7f8db8775140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec  6 04:38:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:02.508+0000 7f8db8775140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:38:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:02.576+0000 7f8db8775140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x562e1eddad00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.qhdjwa(active, starting, since 0.337067s)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Starting
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:38:02
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] No pools available
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  6 04:38:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:02 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [progress INFO root] Loading...
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [progress INFO root] No stored events to load
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded [] historic events
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019931811 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.qhdjwa(active, since 1.34916s)
Dec  6 04:38:03 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  6 04:38:03 np0005548915 silly_margulis[75161]: {
Dec  6 04:38:03 np0005548915 silly_margulis[75161]:    "mgrmap_epoch": 7,
Dec  6 04:38:03 np0005548915 silly_margulis[75161]:    "initialized": true
Dec  6 04:38:03 np0005548915 silly_margulis[75161]: }
Dec  6 04:38:03 np0005548915 systemd[1]: libpod-f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db.scope: Deactivated successfully.
Dec  6 04:38:03 np0005548915 podman[75145]: 2025-12-06 09:38:03.971833375 +0000 UTC m=+6.735711446 container died f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: Found migration_current of "None". Setting to last migration.
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:38:03 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:38:04 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35-merged.mount: Deactivated successfully.
Dec  6 04:38:04 np0005548915 podman[75145]: 2025-12-06 09:38:04.023560187 +0000 UTC m=+6.787438258 container remove f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:04 np0005548915 systemd[1]: libpod-conmon-f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db.scope: Deactivated successfully.
Dec  6 04:38:04 np0005548915 podman[75310]: 2025-12-06 09:38:04.120261617 +0000 UTC m=+0.060967303 container create 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 04:38:04 np0005548915 systemd[1]: Started libpod-conmon-91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac.scope.
Dec  6 04:38:04 np0005548915 podman[75310]: 2025-12-06 09:38:04.092604956 +0000 UTC m=+0.033310692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:04 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:04 np0005548915 podman[75310]: 2025-12-06 09:38:04.221020397 +0000 UTC m=+0.161726123 container init 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:04 np0005548915 podman[75310]: 2025-12-06 09:38:04.231043353 +0000 UTC m=+0.171749039 container start 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 04:38:04 np0005548915 podman[75310]: 2025-12-06 09:38:04.235207924 +0000 UTC m=+0.175913690 container attach 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:04 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  6 04:38:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  6 04:38:04 np0005548915 systemd[1]: libpod-91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac.scope: Deactivated successfully.
Dec  6 04:38:04 np0005548915 podman[75310]: 2025-12-06 09:38:04.692232559 +0000 UTC m=+0.632938215 container died 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:04 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a-merged.mount: Deactivated successfully.
Dec  6 04:38:04 np0005548915 podman[75310]: 2025-12-06 09:38:04.732921835 +0000 UTC m=+0.673627491 container remove 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:38:04 np0005548915 systemd[1]: libpod-conmon-91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac.scope: Deactivated successfully.
Dec  6 04:38:04 np0005548915 podman[75365]: 2025-12-06 09:38:04.812152334 +0000 UTC m=+0.055686130 container create 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 04:38:04 np0005548915 systemd[1]: Started libpod-conmon-98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643.scope.
Dec  6 04:38:04 np0005548915 podman[75365]: 2025-12-06 09:38:04.781160768 +0000 UTC m=+0.024694624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:04 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:04 np0005548915 podman[75365]: 2025-12-06 09:38:04.914587257 +0000 UTC m=+0.158121043 container init 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 04:38:04 np0005548915 podman[75365]: 2025-12-06 09:38:04.921260667 +0000 UTC m=+0.164794453 container start 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:38:04 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:04 np0005548915 podman[75365]: 2025-12-06 09:38:04.926911367 +0000 UTC m=+0.170445163 container attach 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Bus STARTING
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Bus STARTING
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.qhdjwa(active, since 2s)
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Client ('192.168.122.100', 46222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Client ('192.168.122.100', 46222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_user
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_config
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  6 04:38:05 np0005548915 nifty_euler[75381]: ssh user set to ceph-admin. sudo will be used
Dec  6 04:38:05 np0005548915 systemd[1]: libpod-98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643.scope: Deactivated successfully.
Dec  6 04:38:05 np0005548915 podman[75365]: 2025-12-06 09:38:05.348541171 +0000 UTC m=+0.592074927 container died 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:38:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay-41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8-merged.mount: Deactivated successfully.
Dec  6 04:38:05 np0005548915 podman[75365]: 2025-12-06 09:38:05.389034092 +0000 UTC m=+0.632567848 container remove 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Bus STARTED
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Bus STARTED
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  6 04:38:05 np0005548915 systemd[1]: libpod-conmon-98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643.scope: Deactivated successfully.
Dec  6 04:38:05 np0005548915 podman[75441]: 2025-12-06 09:38:05.491954694 +0000 UTC m=+0.069126833 container create 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:05 np0005548915 systemd[1]: Started libpod-conmon-67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225.scope.
Dec  6 04:38:05 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:05 np0005548915 podman[75441]: 2025-12-06 09:38:05.467393334 +0000 UTC m=+0.044565553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:05 np0005548915 podman[75441]: 2025-12-06 09:38:05.566064753 +0000 UTC m=+0.143236972 container init 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 04:38:05 np0005548915 podman[75441]: 2025-12-06 09:38:05.571448498 +0000 UTC m=+0.148620667 container start 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 04:38:05 np0005548915 podman[75441]: 2025-12-06 09:38:05.575651111 +0000 UTC m=+0.152823290 container attach 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec  6 04:38:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Set ssh private key
Dec  6 04:38:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  6 04:38:05 np0005548915 systemd[1]: libpod-67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225.scope: Deactivated successfully.
Dec  6 04:38:05 np0005548915 podman[75441]: 2025-12-06 09:38:05.95877032 +0000 UTC m=+0.535942479 container died 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550-merged.mount: Deactivated successfully.
Dec  6 04:38:06 np0005548915 podman[75441]: 2025-12-06 09:38:06.010679705 +0000 UTC m=+0.587851874 container remove 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:38:06 np0005548915 systemd[1]: libpod-conmon-67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225.scope: Deactivated successfully.
Dec  6 04:38:06 np0005548915 podman[75497]: 2025-12-06 09:38:06.096140616 +0000 UTC m=+0.058165208 container create 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:06 np0005548915 systemd[1]: Started libpod-conmon-814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547.scope.
Dec  6 04:38:06 np0005548915 podman[75497]: 2025-12-06 09:38:06.081335266 +0000 UTC m=+0.043359878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:06 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 podman[75497]: 2025-12-06 09:38:06.190630703 +0000 UTC m=+0.152655345 container init 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:38:06 np0005548915 podman[75497]: 2025-12-06 09:38:06.206016294 +0000 UTC m=+0.168040886 container start 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 04:38:06 np0005548915 podman[75497]: 2025-12-06 09:38:06.209439891 +0000 UTC m=+0.171464483 container attach 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Bus STARTING
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Client ('192.168.122.100', 46222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: Set ssh ssh_user
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: Set ssh ssh_config
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: ssh user set to ceph-admin. sudo will be used
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Bus STARTED
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:06 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec  6 04:38:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:06 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  6 04:38:06 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  6 04:38:06 np0005548915 systemd[1]: libpod-814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547.scope: Deactivated successfully.
Dec  6 04:38:06 np0005548915 podman[75497]: 2025-12-06 09:38:06.594759094 +0000 UTC m=+0.556783746 container died 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76-merged.mount: Deactivated successfully.
Dec  6 04:38:06 np0005548915 podman[75497]: 2025-12-06 09:38:06.634716166 +0000 UTC m=+0.596740808 container remove 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:06 np0005548915 systemd[1]: libpod-conmon-814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547.scope: Deactivated successfully.
Dec  6 04:38:06 np0005548915 podman[75552]: 2025-12-06 09:38:06.734746471 +0000 UTC m=+0.064221767 container create 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:38:06 np0005548915 systemd[1]: Started libpod-conmon-8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570.scope.
Dec  6 04:38:06 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:06 np0005548915 podman[75552]: 2025-12-06 09:38:06.71422466 +0000 UTC m=+0.043699946 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:06 np0005548915 podman[75552]: 2025-12-06 09:38:06.823645649 +0000 UTC m=+0.153120925 container init 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:06 np0005548915 podman[75552]: 2025-12-06 09:38:06.832636084 +0000 UTC m=+0.162111360 container start 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:38:06 np0005548915 podman[75552]: 2025-12-06 09:38:06.836063452 +0000 UTC m=+0.165538758 container attach 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:38:06 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:07 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:07 np0005548915 affectionate_lichterman[75569]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr6+qXxL7AUoz6da9uYOaWBQVg93dmp6B4R2YfuW7AUrOvPCB5ME9ViFnWrnivTbTxvEEoK75W+01vhVovMJYBez4JJzeN+FwqLcHALLyaRKfnHPJBnd9vk1AKqgh05Mcv8diCcMCRdRYgNXDJS0/hZ6tFAM3/YFu07KsgsGgP86KG8dqKEzKvWEiXpg63wz4g1JufT5u5vePJ15cRiWA0NyjEQgHmrLrv02lvP7Tz/y0+h4GWHaHuIjMXfdG56OkCx1NM/QEyHmEGheBwcbg874x1+nt7wMtMGZ1QatviZ6fxs5OK5qqiLu3aBnJMmEa124CRz1/L8fxSFeTlARBG6jr95DSRCQOFWvONY/yVCv5LN+HDHDzQKdK4qdMcpZW0dbifaJuCkEE0iIgei1ExA86w8d1Zo22xnHOgN3FYcS/LbMtn8yIyX6oaNhmuu6wgNe/k9LP28whRIH5x+Xj3U79uE0bKko6M8x6zVM2tkT9pt3zRH8Fyz/Trklu/GI8= zuul@controller
Dec  6 04:38:07 np0005548915 systemd[1]: libpod-8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570.scope: Deactivated successfully.
Dec  6 04:38:07 np0005548915 podman[75552]: 2025-12-06 09:38:07.202198509 +0000 UTC m=+0.531673825 container died 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:38:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc-merged.mount: Deactivated successfully.
Dec  6 04:38:07 np0005548915 podman[75552]: 2025-12-06 09:38:07.2472207 +0000 UTC m=+0.576695966 container remove 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:07 np0005548915 systemd[1]: libpod-conmon-8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570.scope: Deactivated successfully.
Dec  6 04:38:07 np0005548915 podman[75607]: 2025-12-06 09:38:07.34438396 +0000 UTC m=+0.068323867 container create 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec  6 04:38:07 np0005548915 systemd[1]: Started libpod-conmon-0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9.scope.
Dec  6 04:38:07 np0005548915 podman[75607]: 2025-12-06 09:38:07.308689431 +0000 UTC m=+0.032629408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:07 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:07 np0005548915 podman[75607]: 2025-12-06 09:38:07.428746339 +0000 UTC m=+0.152686276 container init 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:38:07 np0005548915 podman[75607]: 2025-12-06 09:38:07.434297027 +0000 UTC m=+0.158236934 container start 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:07 np0005548915 podman[75607]: 2025-12-06 09:38:07.438370047 +0000 UTC m=+0.162309974 container attach 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec  6 04:38:07 np0005548915 ceph-mon[74327]: Set ssh ssh_identity_key
Dec  6 04:38:07 np0005548915 ceph-mon[74327]: Set ssh private key
Dec  6 04:38:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:07 np0005548915 ceph-mon[74327]: Set ssh ssh_identity_pub
Dec  6 04:38:07 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:07 np0005548915 systemd[1]: Created slice User Slice of UID 42477.
Dec  6 04:38:08 np0005548915 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  6 04:38:08 np0005548915 systemd-logind[795]: New session 21 of user ceph-admin.
Dec  6 04:38:08 np0005548915 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  6 04:38:08 np0005548915 systemd[1]: Starting User Manager for UID 42477...
Dec  6 04:38:08 np0005548915 systemd-logind[795]: New session 23 of user ceph-admin.
Dec  6 04:38:08 np0005548915 systemd[75653]: Queued start job for default target Main User Target.
Dec  6 04:38:08 np0005548915 systemd[75653]: Created slice User Application Slice.
Dec  6 04:38:08 np0005548915 systemd[75653]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 04:38:08 np0005548915 systemd[75653]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 04:38:08 np0005548915 systemd[75653]: Reached target Paths.
Dec  6 04:38:08 np0005548915 systemd[75653]: Reached target Timers.
Dec  6 04:38:08 np0005548915 systemd[75653]: Starting D-Bus User Message Bus Socket...
Dec  6 04:38:08 np0005548915 systemd[75653]: Starting Create User's Volatile Files and Directories...
Dec  6 04:38:08 np0005548915 systemd[75653]: Listening on D-Bus User Message Bus Socket.
Dec  6 04:38:08 np0005548915 systemd[75653]: Reached target Sockets.
Dec  6 04:38:08 np0005548915 systemd[75653]: Finished Create User's Volatile Files and Directories.
Dec  6 04:38:08 np0005548915 systemd[75653]: Reached target Basic System.
Dec  6 04:38:08 np0005548915 systemd[75653]: Reached target Main User Target.
Dec  6 04:38:08 np0005548915 systemd[75653]: Startup finished in 177ms.
Dec  6 04:38:08 np0005548915 systemd[1]: Started User Manager for UID 42477.
Dec  6 04:38:08 np0005548915 systemd[1]: Started Session 21 of User ceph-admin.
Dec  6 04:38:08 np0005548915 systemd[1]: Started Session 23 of User ceph-admin.
Dec  6 04:38:08 np0005548915 systemd-logind[795]: New session 24 of user ceph-admin.
Dec  6 04:38:08 np0005548915 systemd[1]: Started Session 24 of User ceph-admin.
Dec  6 04:38:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053159 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:08 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:09 np0005548915 systemd-logind[795]: New session 25 of user ceph-admin.
Dec  6 04:38:09 np0005548915 systemd[1]: Started Session 25 of User ceph-admin.
Dec  6 04:38:09 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  6 04:38:09 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  6 04:38:09 np0005548915 systemd-logind[795]: New session 26 of user ceph-admin.
Dec  6 04:38:09 np0005548915 systemd[1]: Started Session 26 of User ceph-admin.
Dec  6 04:38:09 np0005548915 systemd-logind[795]: New session 27 of user ceph-admin.
Dec  6 04:38:09 np0005548915 systemd[1]: Started Session 27 of User ceph-admin.
Dec  6 04:38:10 np0005548915 systemd-logind[795]: New session 28 of user ceph-admin.
Dec  6 04:38:10 np0005548915 systemd[1]: Started Session 28 of User ceph-admin.
Dec  6 04:38:10 np0005548915 ceph-mon[74327]: Deploying cephadm binary to compute-0
Dec  6 04:38:10 np0005548915 systemd-logind[795]: New session 29 of user ceph-admin.
Dec  6 04:38:10 np0005548915 systemd[1]: Started Session 29 of User ceph-admin.
Dec  6 04:38:10 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:11 np0005548915 systemd-logind[795]: New session 30 of user ceph-admin.
Dec  6 04:38:11 np0005548915 systemd[1]: Started Session 30 of User ceph-admin.
Dec  6 04:38:11 np0005548915 systemd-logind[795]: New session 31 of user ceph-admin.
Dec  6 04:38:11 np0005548915 systemd[1]: Started Session 31 of User ceph-admin.
Dec  6 04:38:12 np0005548915 systemd-logind[795]: New session 32 of user ceph-admin.
Dec  6 04:38:12 np0005548915 systemd[1]: Started Session 32 of User ceph-admin.
Dec  6 04:38:12 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:13 np0005548915 systemd-logind[795]: New session 33 of user ceph-admin.
Dec  6 04:38:13 np0005548915 systemd[1]: Started Session 33 of User ceph-admin.
Dec  6 04:38:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:13 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Added host compute-0
Dec  6 04:38:13 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  6 04:38:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  6 04:38:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  6 04:38:13 np0005548915 eager_euler[75623]: Added host 'compute-0' with addr '192.168.122.100'
Dec  6 04:38:13 np0005548915 systemd[1]: libpod-0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9.scope: Deactivated successfully.
Dec  6 04:38:13 np0005548915 podman[76014]: 2025-12-06 09:38:13.666077341 +0000 UTC m=+0.044292777 container died 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:13 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7-merged.mount: Deactivated successfully.
Dec  6 04:38:13 np0005548915 podman[76014]: 2025-12-06 09:38:13.704369929 +0000 UTC m=+0.082585355 container remove 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:13 np0005548915 systemd[1]: libpod-conmon-0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9.scope: Deactivated successfully.
Dec  6 04:38:13 np0005548915 podman[76071]: 2025-12-06 09:38:13.786817641 +0000 UTC m=+0.050443197 container create 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:13 np0005548915 systemd[1]: Started libpod-conmon-135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72.scope.
Dec  6 04:38:13 np0005548915 podman[76071]: 2025-12-06 09:38:13.767095266 +0000 UTC m=+0.030720852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:13 np0005548915 podman[76071]: 2025-12-06 09:38:13.912786234 +0000 UTC m=+0.176411860 container init 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 04:38:13 np0005548915 podman[76071]: 2025-12-06 09:38:13.923643736 +0000 UTC m=+0.187269292 container start 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:13 np0005548915 podman[76071]: 2025-12-06 09:38:13.927852368 +0000 UTC m=+0.191477994 container attach 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 04:38:14 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:14 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  6 04:38:14 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  6 04:38:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  6 04:38:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:14 np0005548915 flamboyant_austin[76089]: Scheduled mon update...
Dec  6 04:38:14 np0005548915 systemd[1]: libpod-135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72.scope: Deactivated successfully.
Dec  6 04:38:14 np0005548915 podman[76071]: 2025-12-06 09:38:14.360566348 +0000 UTC m=+0.624191924 container died 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  6 04:38:14 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb-merged.mount: Deactivated successfully.
Dec  6 04:38:14 np0005548915 podman[76071]: 2025-12-06 09:38:14.410894762 +0000 UTC m=+0.674520348 container remove 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 04:38:14 np0005548915 systemd[1]: libpod-conmon-135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72.scope: Deactivated successfully.
Dec  6 04:38:14 np0005548915 podman[76106]: 2025-12-06 09:38:14.509768845 +0000 UTC m=+0.512615613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:14 np0005548915 podman[76152]: 2025-12-06 09:38:14.537067779 +0000 UTC m=+0.092529740 container create d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:38:14 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:14 np0005548915 ceph-mon[74327]: Added host compute-0
Dec  6 04:38:14 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:14 np0005548915 systemd[1]: Started libpod-conmon-d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5.scope.
Dec  6 04:38:14 np0005548915 podman[76152]: 2025-12-06 09:38:14.490448777 +0000 UTC m=+0.045910748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:14 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:14 np0005548915 podman[76152]: 2025-12-06 09:38:14.651774652 +0000 UTC m=+0.207236653 container init d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:14 np0005548915 podman[76152]: 2025-12-06 09:38:14.666503899 +0000 UTC m=+0.221965850 container start d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:14 np0005548915 podman[76152]: 2025-12-06 09:38:14.670347324 +0000 UTC m=+0.225809295 container attach d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:14 np0005548915 podman[76179]: 2025-12-06 09:38:14.694327523 +0000 UTC m=+0.075273692 container create 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 04:38:14 np0005548915 systemd[1]: Started libpod-conmon-6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a.scope.
Dec  6 04:38:14 np0005548915 podman[76179]: 2025-12-06 09:38:14.664304306 +0000 UTC m=+0.045250525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:14 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:14 np0005548915 podman[76179]: 2025-12-06 09:38:14.792949141 +0000 UTC m=+0.173895280 container init 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:38:14 np0005548915 podman[76179]: 2025-12-06 09:38:14.79801947 +0000 UTC m=+0.178965629 container start 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 04:38:14 np0005548915 podman[76179]: 2025-12-06 09:38:14.802253833 +0000 UTC m=+0.183200062 container attach 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:38:14 np0005548915 sleepy_brattain[76199]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  6 04:38:14 np0005548915 systemd[1]: libpod-6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a.scope: Deactivated successfully.
Dec  6 04:38:14 np0005548915 podman[76179]: 2025-12-06 09:38:14.89569569 +0000 UTC m=+0.276641849 container died 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:38:14 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e8615dc0d120f5a91f38ae865b06dfd38f7b69726079a9ef1236dd8ce2e64a4c-merged.mount: Deactivated successfully.
Dec  6 04:38:14 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:14 np0005548915 podman[76179]: 2025-12-06 09:38:14.950610283 +0000 UTC m=+0.331556452 container remove 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 04:38:14 np0005548915 systemd[1]: libpod-conmon-6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a.scope: Deactivated successfully.
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:15 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:15 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  6 04:38:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:15 np0005548915 trusting_ganguly[76177]: Scheduled mgr update...
Dec  6 04:38:15 np0005548915 systemd[1]: libpod-d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5.scope: Deactivated successfully.
Dec  6 04:38:15 np0005548915 podman[76152]: 2025-12-06 09:38:15.109381538 +0000 UTC m=+0.664843449 container died d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:15 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004-merged.mount: Deactivated successfully.
Dec  6 04:38:15 np0005548915 podman[76152]: 2025-12-06 09:38:15.157467247 +0000 UTC m=+0.712929198 container remove d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:38:15 np0005548915 systemd[1]: libpod-conmon-d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5.scope: Deactivated successfully.
Dec  6 04:38:15 np0005548915 podman[76296]: 2025-12-06 09:38:15.223680072 +0000 UTC m=+0.048053900 container create 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 04:38:15 np0005548915 systemd[1]: Started libpod-conmon-2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba.scope.
Dec  6 04:38:15 np0005548915 podman[76296]: 2025-12-06 09:38:15.203362874 +0000 UTC m=+0.027736682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:15 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:15 np0005548915 podman[76296]: 2025-12-06 09:38:15.330067242 +0000 UTC m=+0.154441060 container init 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:15 np0005548915 podman[76296]: 2025-12-06 09:38:15.341240081 +0000 UTC m=+0.165613909 container start 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:15 np0005548915 podman[76296]: 2025-12-06 09:38:15.345758228 +0000 UTC m=+0.170132056 container attach 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: Saving service mon spec with placement count:5
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:15 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:15 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service crash spec with placement *
Dec  6 04:38:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  6 04:38:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:15 np0005548915 friendly_bose[76314]: Scheduled crash update...
Dec  6 04:38:15 np0005548915 systemd[1]: libpod-2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba.scope: Deactivated successfully.
Dec  6 04:38:15 np0005548915 podman[76296]: 2025-12-06 09:38:15.766186488 +0000 UTC m=+0.590560336 container died 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 04:38:15 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd-merged.mount: Deactivated successfully.
Dec  6 04:38:15 np0005548915 podman[76296]: 2025-12-06 09:38:15.814782539 +0000 UTC m=+0.639156337 container remove 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  6 04:38:15 np0005548915 systemd[1]: libpod-conmon-2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba.scope: Deactivated successfully.
Dec  6 04:38:15 np0005548915 podman[76420]: 2025-12-06 09:38:15.895363703 +0000 UTC m=+0.052081318 container create 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 04:38:15 np0005548915 systemd[1]: Started libpod-conmon-02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f.scope.
Dec  6 04:38:15 np0005548915 podman[76420]: 2025-12-06 09:38:15.879067625 +0000 UTC m=+0.035785260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:15 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:16 np0005548915 podman[76420]: 2025-12-06 09:38:16.003301184 +0000 UTC m=+0.160018859 container init 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:38:16 np0005548915 podman[76420]: 2025-12-06 09:38:16.012801679 +0000 UTC m=+0.169519304 container start 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  6 04:38:16 np0005548915 podman[76420]: 2025-12-06 09:38:16.016463941 +0000 UTC m=+0.173181606 container attach 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:16 np0005548915 podman[76531]: 2025-12-06 09:38:16.320720879 +0000 UTC m=+0.080039455 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec  6 04:38:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2619669888' entity='client.admin' 
Dec  6 04:38:16 np0005548915 systemd[1]: libpod-02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f.scope: Deactivated successfully.
Dec  6 04:38:16 np0005548915 podman[76531]: 2025-12-06 09:38:16.420020351 +0000 UTC m=+0.179338877 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:16 np0005548915 podman[76420]: 2025-12-06 09:38:16.439788277 +0000 UTC m=+0.596505942 container died 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  6 04:38:16 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039-merged.mount: Deactivated successfully.
Dec  6 04:38:16 np0005548915 podman[76420]: 2025-12-06 09:38:16.49311075 +0000 UTC m=+0.649828395 container remove 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:38:16 np0005548915 systemd[1]: libpod-conmon-02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f.scope: Deactivated successfully.
Dec  6 04:38:16 np0005548915 podman[76579]: 2025-12-06 09:38:16.578329425 +0000 UTC m=+0.060549654 container create 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:16 np0005548915 ceph-mon[74327]: Saving service mgr spec with placement count:2
Dec  6 04:38:16 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:16 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2619669888' entity='client.admin' 
Dec  6 04:38:16 np0005548915 systemd[1]: Started libpod-conmon-00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586.scope.
Dec  6 04:38:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:16 np0005548915 podman[76579]: 2025-12-06 09:38:16.54786142 +0000 UTC m=+0.030081709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:16 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:16 np0005548915 podman[76579]: 2025-12-06 09:38:16.666080122 +0000 UTC m=+0.148300331 container init 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 04:38:16 np0005548915 podman[76579]: 2025-12-06 09:38:16.676461664 +0000 UTC m=+0.158681863 container start 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:16 np0005548915 podman[76579]: 2025-12-06 09:38:16.680001794 +0000 UTC m=+0.162221993 container attach 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:38:16 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:16 np0005548915 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76694 (sysctl)
Dec  6 04:38:17 np0005548915 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  6 04:38:17 np0005548915 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  6 04:38:17 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:17 np0005548915 systemd[1]: libpod-00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586.scope: Deactivated successfully.
Dec  6 04:38:17 np0005548915 podman[76579]: 2025-12-06 09:38:17.118819682 +0000 UTC m=+0.601039901 container died 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:17 np0005548915 systemd[1]: var-lib-containers-storage-overlay-acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3-merged.mount: Deactivated successfully.
Dec  6 04:38:17 np0005548915 podman[76579]: 2025-12-06 09:38:17.217735477 +0000 UTC m=+0.699955666 container remove 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec  6 04:38:17 np0005548915 systemd[1]: libpod-conmon-00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586.scope: Deactivated successfully.
Dec  6 04:38:17 np0005548915 podman[76717]: 2025-12-06 09:38:17.284897689 +0000 UTC m=+0.046762785 container create ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 04:38:17 np0005548915 systemd[1]: Started libpod-conmon-ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281.scope.
Dec  6 04:38:17 np0005548915 podman[76717]: 2025-12-06 09:38:17.26343692 +0000 UTC m=+0.025302056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:17 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:17 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:17 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:17 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:17 np0005548915 podman[76717]: 2025-12-06 09:38:17.390178868 +0000 UTC m=+0.152043984 container init ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 04:38:17 np0005548915 podman[76717]: 2025-12-06 09:38:17.404892125 +0000 UTC m=+0.166757211 container start ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:17 np0005548915 podman[76717]: 2025-12-06 09:38:17.409630688 +0000 UTC m=+0.171495864 container attach ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: Saving service crash spec with placement *
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:17 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:17 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Added label _admin to host compute-0
Dec  6 04:38:17 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  6 04:38:17 np0005548915 thirsty_joliot[76746]: Added label _admin to host compute-0
Dec  6 04:38:17 np0005548915 systemd[1]: libpod-ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281.scope: Deactivated successfully.
Dec  6 04:38:17 np0005548915 podman[76717]: 2025-12-06 09:38:17.844902107 +0000 UTC m=+0.606767223 container died ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 04:38:17 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43-merged.mount: Deactivated successfully.
Dec  6 04:38:17 np0005548915 podman[76717]: 2025-12-06 09:38:17.897517486 +0000 UTC m=+0.659382612 container remove ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:17 np0005548915 systemd[1]: libpod-conmon-ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281.scope: Deactivated successfully.
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:17 np0005548915 podman[76853]: 2025-12-06 09:38:17.996035692 +0000 UTC m=+0.065135774 container create e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 04:38:18 np0005548915 systemd[1]: Started libpod-conmon-e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466.scope.
Dec  6 04:38:18 np0005548915 podman[76853]: 2025-12-06 09:38:17.969568215 +0000 UTC m=+0.038668357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:18 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:18 np0005548915 podman[76853]: 2025-12-06 09:38:18.09767082 +0000 UTC m=+0.166770962 container init e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:18 np0005548915 podman[76853]: 2025-12-06 09:38:18.107568123 +0000 UTC m=+0.176668205 container start e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 04:38:18 np0005548915 podman[76853]: 2025-12-06 09:38:18.111611632 +0000 UTC m=+0.180711714 container attach e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:18 np0005548915 podman[76983]: 2025-12-06 09:38:18.484155576 +0000 UTC m=+0.028287005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:38:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec  6 04:38:18 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:19 np0005548915 podman[76983]: 2025-12-06 09:38:19.565018939 +0000 UTC m=+1.109150318 container create de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  6 04:38:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1599237284' entity='client.admin' 
Dec  6 04:38:19 np0005548915 silly_williamson[76896]: set mgr/dashboard/cluster/status
Dec  6 04:38:19 np0005548915 systemd[1]: libpod-e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466.scope: Deactivated successfully.
Dec  6 04:38:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:20 np0005548915 ceph-mon[74327]: Added label _admin to host compute-0
Dec  6 04:38:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:20 np0005548915 systemd[1]: Started libpod-conmon-de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c.scope.
Dec  6 04:38:20 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:20 np0005548915 podman[76983]: 2025-12-06 09:38:20.151293701 +0000 UTC m=+1.695425090 container init de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  6 04:38:20 np0005548915 podman[76853]: 2025-12-06 09:38:20.153454803 +0000 UTC m=+2.222554895 container died e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 04:38:20 np0005548915 podman[76983]: 2025-12-06 09:38:20.163154123 +0000 UTC m=+1.707285472 container start de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:38:20 np0005548915 modest_shockley[77013]: 167 167
Dec  6 04:38:20 np0005548915 podman[76983]: 2025-12-06 09:38:20.170775672 +0000 UTC m=+1.714907051 container attach de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:20 np0005548915 systemd[1]: libpod-de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c.scope: Deactivated successfully.
Dec  6 04:38:20 np0005548915 podman[76983]: 2025-12-06 09:38:20.172003076 +0000 UTC m=+1.716134495 container died de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:20 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c-merged.mount: Deactivated successfully.
Dec  6 04:38:20 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f273593ec0c5c125b28d4ab6b9662ecd4d14b1752acaf4aa516f0177722e697b-merged.mount: Deactivated successfully.
Dec  6 04:38:20 np0005548915 podman[76983]: 2025-12-06 09:38:20.229634123 +0000 UTC m=+1.773765502 container remove de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:20 np0005548915 systemd[1]: libpod-conmon-de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c.scope: Deactivated successfully.
Dec  6 04:38:20 np0005548915 podman[76853]: 2025-12-06 09:38:20.250043362 +0000 UTC m=+2.319143454 container remove e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:20 np0005548915 systemd[1]: libpod-conmon-e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466.scope: Deactivated successfully.
Dec  6 04:38:20 np0005548915 podman[77040]: 2025-12-06 09:38:20.513720186 +0000 UTC m=+0.072214243 container create c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:38:20 np0005548915 systemd[1]: Started libpod-conmon-c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb.scope.
Dec  6 04:38:20 np0005548915 podman[77040]: 2025-12-06 09:38:20.487510534 +0000 UTC m=+0.046004681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:38:20 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:20 np0005548915 podman[77040]: 2025-12-06 09:38:20.615957555 +0000 UTC m=+0.174451682 container init c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:38:20 np0005548915 podman[77040]: 2025-12-06 09:38:20.630053491 +0000 UTC m=+0.188547578 container start c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:20 np0005548915 podman[77040]: 2025-12-06 09:38:20.6346336 +0000 UTC m=+0.193127687 container attach c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:38:20 np0005548915 python3[77086]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:20 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:38:20 np0005548915 podman[77091]: 2025-12-06 09:38:20.960186064 +0000 UTC m=+0.056243540 container create 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:38:21 np0005548915 systemd[1]: Started libpod-conmon-0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f.scope.
Dec  6 04:38:21 np0005548915 podman[77091]: 2025-12-06 09:38:20.928385543 +0000 UTC m=+0.024443139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0712903dfffcecaba63226260143e2bb53420b1b55529c40bc27ddae7f729f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0712903dfffcecaba63226260143e2bb53420b1b55529c40bc27ddae7f729f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:21 np0005548915 podman[77091]: 2025-12-06 09:38:21.066003533 +0000 UTC m=+0.162061059 container init 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1599237284' entity='client.admin' 
Dec  6 04:38:21 np0005548915 podman[77091]: 2025-12-06 09:38:21.078966147 +0000 UTC m=+0.175023623 container start 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 04:38:21 np0005548915 podman[77091]: 2025-12-06 09:38:21.08272144 +0000 UTC m=+0.178778906 container attach 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1942879497' entity='client.admin' 
Dec  6 04:38:21 np0005548915 systemd[1]: libpod-0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f.scope: Deactivated successfully.
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]: [
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:    {
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "available": false,
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "being_replaced": false,
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "ceph_device_lvm": false,
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "lsm_data": {},
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "lvs": [],
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "path": "/dev/sr0",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "rejected_reasons": [
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "Has a FileSystem",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "Insufficient space (<5GB)"
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        ],
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        "sys_api": {
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "actuators": null,
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "device_nodes": [
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:                "sr0"
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            ],
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "devname": "sr0",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "human_readable_size": "482.00 KB",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "id_bus": "ata",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "model": "QEMU DVD-ROM",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "nr_requests": "2",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "parent": "/dev/sr0",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "partitions": {},
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "path": "/dev/sr0",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "removable": "1",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "rev": "2.5+",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "ro": "0",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "rotational": "1",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "sas_address": "",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "sas_device_handle": "",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "scheduler_mode": "mq-deadline",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "sectors": 0,
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "sectorsize": "2048",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "size": 493568.0,
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "support_discard": "2048",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "type": "disk",
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:            "vendor": "QEMU"
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:        }
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]:    }
Dec  6 04:38:21 np0005548915 priceless_feynman[77056]: ]
Dec  6 04:38:21 np0005548915 podman[78120]: 2025-12-06 09:38:21.566000348 +0000 UTC m=+0.030983157 container died 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 04:38:21 np0005548915 systemd[1]: libpod-c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb.scope: Deactivated successfully.
Dec  6 04:38:21 np0005548915 podman[77040]: 2025-12-06 09:38:21.576720948 +0000 UTC m=+1.135215035 container died c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 04:38:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b0712903dfffcecaba63226260143e2bb53420b1b55529c40bc27ddae7f729f3-merged.mount: Deactivated successfully.
Dec  6 04:38:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1-merged.mount: Deactivated successfully.
Dec  6 04:38:21 np0005548915 podman[78120]: 2025-12-06 09:38:21.61418319 +0000 UTC m=+0.079166029 container remove 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:38:21 np0005548915 systemd[1]: libpod-conmon-0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f.scope: Deactivated successfully.
Dec  6 04:38:21 np0005548915 podman[77040]: 2025-12-06 09:38:21.634104179 +0000 UTC m=+1.192598266 container remove c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:21 np0005548915 systemd[1]: libpod-conmon-c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb.scope: Deactivated successfully.
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:38:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:21 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:38:21 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1942879497' entity='client.admin' 
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:22 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:38:22 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:38:22 np0005548915 ansible-async_wrapper.py[78700]: Invoked with j894321523705 30 /home/zuul/.ansible/tmp/ansible-tmp-1765013902.0274796-37079-11437068814984/AnsiballZ_command.py _
Dec  6 04:38:22 np0005548915 ansible-async_wrapper.py[78768]: Starting module and watcher
Dec  6 04:38:22 np0005548915 ansible-async_wrapper.py[78768]: Start watching 78769 (30)
Dec  6 04:38:22 np0005548915 ansible-async_wrapper.py[78769]: Start module (78769)
Dec  6 04:38:22 np0005548915 ansible-async_wrapper.py[78700]: Return async_wrapper task started.
Dec  6 04:38:22 np0005548915 ceph-mgr[74618]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  6 04:38:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  6 04:38:23 np0005548915 python3[78770]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:23 np0005548915 podman[78834]: 2025-12-06 09:38:23.068524621 +0000 UTC m=+0.049302385 container create 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 04:38:23 np0005548915 systemd[1]: Started libpod-conmon-88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8.scope.
Dec  6 04:38:23 np0005548915 podman[78834]: 2025-12-06 09:38:23.047236374 +0000 UTC m=+0.028014148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:23 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200bb43861020f192ae49c17b3ae495bd935e1df3973fe1382f394801c7bc23b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200bb43861020f192ae49c17b3ae495bd935e1df3973fe1382f394801c7bc23b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:23 np0005548915 podman[78834]: 2025-12-06 09:38:23.172039255 +0000 UTC m=+0.152817089 container init 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 04:38:23 np0005548915 podman[78834]: 2025-12-06 09:38:23.180368108 +0000 UTC m=+0.161145862 container start 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:38:23 np0005548915 podman[78834]: 2025-12-06 09:38:23.184832095 +0000 UTC m=+0.165609889 container attach 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:38:23 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:38:23 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:38:23 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  6 04:38:23 np0005548915 interesting_chatelet[78882]: 
Dec  6 04:38:23 np0005548915 interesting_chatelet[78882]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  6 04:38:23 np0005548915 systemd[1]: libpod-88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8.scope: Deactivated successfully.
Dec  6 04:38:23 np0005548915 podman[78834]: 2025-12-06 09:38:23.548983625 +0000 UTC m=+0.529761439 container died 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 04:38:23 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:38:23 np0005548915 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:38:23 np0005548915 ceph-mon[74327]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  6 04:38:23 np0005548915 systemd[1]: var-lib-containers-storage-overlay-200bb43861020f192ae49c17b3ae495bd935e1df3973fe1382f394801c7bc23b-merged.mount: Deactivated successfully.
Dec  6 04:38:23 np0005548915 podman[78834]: 2025-12-06 09:38:23.797739947 +0000 UTC m=+0.778517741 container remove 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:38:23 np0005548915 systemd[1]: libpod-conmon-88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8.scope: Deactivated successfully.
Dec  6 04:38:23 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:38:23 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:38:23 np0005548915 ansible-async_wrapper.py[78769]: Module complete (78769)
Dec  6 04:38:24 np0005548915 python3[79327]: ansible-ansible.legacy.async_status Invoked with jid=j894321523705.78700 mode=status _async_dir=/root/.ansible_async
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:24 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 4a51f2f8-7a09-415b-a5f4-d025247f5419 (Updating crash deployment (+1 -> 1))
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:38:24 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  6 04:38:24 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  6 04:38:24 np0005548915 python3[79491]: ansible-ansible.legacy.async_status Invoked with jid=j894321523705.78700 mode=cleanup _async_dir=/root/.ansible_async
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  6 04:38:24 np0005548915 ceph-mon[74327]: Deploying daemon crash.compute-0 on compute-0
Dec  6 04:38:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:25 np0005548915 podman[79608]: 2025-12-06 09:38:25.128837151 +0000 UTC m=+0.062815559 container create 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  6 04:38:25 np0005548915 systemd[1]: Started libpod-conmon-833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7.scope.
Dec  6 04:38:25 np0005548915 python3[79596]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 04:38:25 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:25 np0005548915 podman[79608]: 2025-12-06 09:38:25.101906604 +0000 UTC m=+0.035885102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:38:25 np0005548915 podman[79608]: 2025-12-06 09:38:25.205680323 +0000 UTC m=+0.139658751 container init 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 04:38:25 np0005548915 podman[79608]: 2025-12-06 09:38:25.214775301 +0000 UTC m=+0.148753709 container start 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:25 np0005548915 podman[79608]: 2025-12-06 09:38:25.218064695 +0000 UTC m=+0.152043143 container attach 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 04:38:25 np0005548915 lucid_newton[79625]: 167 167
Dec  6 04:38:25 np0005548915 systemd[1]: libpod-833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7.scope: Deactivated successfully.
Dec  6 04:38:25 np0005548915 podman[79608]: 2025-12-06 09:38:25.220093105 +0000 UTC m=+0.154071553 container died 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:38:25 np0005548915 systemd[1]: var-lib-containers-storage-overlay-28fa271157de2fcce9f7f4aafbfaf62f094f06a4bbb0dce06e5e54abc1c29d68-merged.mount: Deactivated successfully.
Dec  6 04:38:25 np0005548915 podman[79608]: 2025-12-06 09:38:25.264471452 +0000 UTC m=+0.198449860 container remove 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:25 np0005548915 systemd[1]: libpod-conmon-833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7.scope: Deactivated successfully.
Dec  6 04:38:25 np0005548915 systemd[1]: Reloading.
Dec  6 04:38:25 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:38:25 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:38:25 np0005548915 systemd[1]: Reloading.
Dec  6 04:38:25 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:38:25 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:38:25 np0005548915 python3[79711]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:25 np0005548915 podman[79747]: 2025-12-06 09:38:25.823682346 +0000 UTC m=+0.060229359 container create 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:25 np0005548915 systemd[1]: Started libpod-conmon-478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f.scope.
Dec  6 04:38:25 np0005548915 systemd[1]: Starting Ceph crash.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:38:25 np0005548915 podman[79747]: 2025-12-06 09:38:25.791063468 +0000 UTC m=+0.027610561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:25 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:25 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:25 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:25 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:25 np0005548915 podman[79747]: 2025-12-06 09:38:25.914028222 +0000 UTC m=+0.150575245 container init 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:25 np0005548915 podman[79747]: 2025-12-06 09:38:25.925304222 +0000 UTC m=+0.161851235 container start 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:25 np0005548915 podman[79747]: 2025-12-06 09:38:25.929202978 +0000 UTC m=+0.165750001 container attach 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 04:38:26 np0005548915 podman[79835]: 2025-12-06 09:38:26.216766873 +0000 UTC m=+0.066789692 container create aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:38:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:26 np0005548915 podman[79835]: 2025-12-06 09:38:26.280352038 +0000 UTC m=+0.130374887 container init aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:26 np0005548915 podman[79835]: 2025-12-06 09:38:26.193333428 +0000 UTC m=+0.043356297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:38:26 np0005548915 podman[79835]: 2025-12-06 09:38:26.286996115 +0000 UTC m=+0.137018934 container start aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:26 np0005548915 bash[79835]: aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f
Dec  6 04:38:26 np0005548915 systemd[1]: Started Ceph crash.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  6 04:38:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  6 04:38:26 np0005548915 optimistic_sutherland[79765]: 
Dec  6 04:38:26 np0005548915 optimistic_sutherland[79765]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:26 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 4a51f2f8-7a09-415b-a5f4-d025247f5419 (Updating crash deployment (+1 -> 1))
Dec  6 04:38:26 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 4a51f2f8-7a09-415b-a5f4-d025247f5419 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  6 04:38:26 np0005548915 systemd[1]: libpod-478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f.scope: Deactivated successfully.
Dec  6 04:38:26 np0005548915 podman[79747]: 2025-12-06 09:38:26.386270162 +0000 UTC m=+0.622817175 container died 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  6 04:38:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:26 np0005548915 systemd[1]: var-lib-containers-storage-overlay-4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb-merged.mount: Deactivated successfully.
Dec  6 04:38:26 np0005548915 podman[79747]: 2025-12-06 09:38:26.430337517 +0000 UTC m=+0.666884520 container remove 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  6 04:38:26 np0005548915 systemd[1]: libpod-conmon-478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f.scope: Deactivated successfully.
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.493+0000 7fc7dd97d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.493+0000 7fc7dd97d640 -1 AuthRegistry(0x7fc7d80698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.494+0000 7fc7dd97d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.494+0000 7fc7dd97d640 -1 AuthRegistry(0x7fc7dd97bff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.495+0000 7fc7d6ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.495+0000 7fc7dd97d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  6 04:38:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  6 04:38:26 np0005548915 python3[79980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:26 np0005548915 podman[80006]: 2025-12-06 09:38:26.939589516 +0000 UTC m=+0.039581686 container create 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:26 np0005548915 systemd[1]: Started libpod-conmon-609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933.scope.
Dec  6 04:38:26 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:27 np0005548915 podman[80006]: 2025-12-06 09:38:27.009039718 +0000 UTC m=+0.109031898 container init 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:27 np0005548915 podman[80006]: 2025-12-06 09:38:27.014315389 +0000 UTC m=+0.114307549 container start 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 04:38:27 np0005548915 podman[80006]: 2025-12-06 09:38:26.920564649 +0000 UTC m=+0.020556829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:27 np0005548915 podman[80006]: 2025-12-06 09:38:27.017999557 +0000 UTC m=+0.117991727 container attach 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:38:27 np0005548915 podman[80092]: 2025-12-06 09:38:27.222171291 +0000 UTC m=+0.055052929 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:27 np0005548915 podman[80092]: 2025-12-06 09:38:27.34589835 +0000 UTC m=+0.178780018 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/426016775' entity='client.admin' 
Dec  6 04:38:27 np0005548915 systemd[1]: libpod-609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933.scope: Deactivated successfully.
Dec  6 04:38:27 np0005548915 podman[80006]: 2025-12-06 09:38:27.413924063 +0000 UTC m=+0.513916233 container died 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 04:38:27 np0005548915 systemd[1]: var-lib-containers-storage-overlay-dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96-merged.mount: Deactivated successfully.
Dec  6 04:38:27 np0005548915 podman[80006]: 2025-12-06 09:38:27.46252695 +0000 UTC m=+0.562519150 container remove 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:38:27 np0005548915 systemd[1]: libpod-conmon-609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933.scope: Deactivated successfully.
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:27 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  6 04:38:27 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:38:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:38:27 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  6 04:38:27 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  6 04:38:27 np0005548915 python3[80201]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:27 np0005548915 ansible-async_wrapper.py[78768]: Done in kid B.
Dec  6 04:38:27 np0005548915 podman[80243]: 2025-12-06 09:38:27.908791859 +0000 UTC m=+0.055658435 container create 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 04:38:27 np0005548915 systemd[1]: Started libpod-conmon-8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b.scope.
Dec  6 04:38:27 np0005548915 podman[80243]: 2025-12-06 09:38:27.886983377 +0000 UTC m=+0.033849953 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:27 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:28 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 1 completed events
Dec  6 04:38:28 np0005548915 podman[80243]: 2025-12-06 09:38:28.01045951 +0000 UTC m=+0.157326096 container init 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 podman[80243]: 2025-12-06 09:38:28.021679389 +0000 UTC m=+0.168545935 container start 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:28 np0005548915 podman[80243]: 2025-12-06 09:38:28.025579763 +0000 UTC m=+0.172446299 container attach 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 04:38:28 np0005548915 podman[80327]: 2025-12-06 09:38:28.377691541 +0000 UTC m=+0.067304475 container create d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/426016775' entity='client.admin' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 systemd[1]: Started libpod-conmon-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope.
Dec  6 04:38:28 np0005548915 podman[80327]: 2025-12-06 09:38:28.350321741 +0000 UTC m=+0.039934705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3562087741' entity='client.admin' 
Dec  6 04:38:28 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:28 np0005548915 podman[80327]: 2025-12-06 09:38:28.46801093 +0000 UTC m=+0.157623844 container init d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 04:38:28 np0005548915 podman[80327]: 2025-12-06 09:38:28.472866879 +0000 UTC m=+0.162479783 container start d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 04:38:28 np0005548915 podman[80327]: 2025-12-06 09:38:28.476692072 +0000 UTC m=+0.166304986 container attach d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 04:38:28 np0005548915 clever_snyder[80345]: 167 167
Dec  6 04:38:28 np0005548915 systemd[1]: libpod-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope: Deactivated successfully.
Dec  6 04:38:28 np0005548915 conmon[80345]: conmon d75a2ad5e03ce26641b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope/container/memory.events
Dec  6 04:38:28 np0005548915 podman[80327]: 2025-12-06 09:38:28.479418955 +0000 UTC m=+0.169031889 container died d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:28 np0005548915 systemd[1]: libpod-8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b.scope: Deactivated successfully.
Dec  6 04:38:28 np0005548915 podman[80243]: 2025-12-06 09:38:28.489900653 +0000 UTC m=+0.636767229 container died 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:28 np0005548915 systemd[1]: var-lib-containers-storage-overlay-887376de203f97c8c70cb78eb36d24285d147b004af347a4a412a35343be5383-merged.mount: Deactivated successfully.
Dec  6 04:38:28 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1-merged.mount: Deactivated successfully.
Dec  6 04:38:28 np0005548915 podman[80327]: 2025-12-06 09:38:28.545776813 +0000 UTC m=+0.235389757 container remove d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:38:28 np0005548915 systemd[1]: libpod-conmon-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope: Deactivated successfully.
Dec  6 04:38:28 np0005548915 podman[80243]: 2025-12-06 09:38:28.565291384 +0000 UTC m=+0.712157940 container remove 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 04:38:28 np0005548915 systemd[1]: libpod-conmon-8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b.scope: Deactivated successfully.
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:28 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qhdjwa (unknown last config time)...
Dec  6 04:38:28 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qhdjwa (unknown last config time)...
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:38:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:38:28 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec  6 04:38:28 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec  6 04:38:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:29 np0005548915 python3[80451]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:29 np0005548915 podman[80466]: 2025-12-06 09:38:29.128033579 +0000 UTC m=+0.078054443 container create cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 04:38:29 np0005548915 podman[80474]: 2025-12-06 09:38:29.167318456 +0000 UTC m=+0.074781665 container create 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 04:38:29 np0005548915 systemd[1]: Started libpod-conmon-cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99.scope.
Dec  6 04:38:29 np0005548915 podman[80466]: 2025-12-06 09:38:29.09469199 +0000 UTC m=+0.044712904 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:29 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:29 np0005548915 systemd[1]: Started libpod-conmon-46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265.scope.
Dec  6 04:38:29 np0005548915 podman[80474]: 2025-12-06 09:38:29.127598487 +0000 UTC m=+0.035061746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:29 np0005548915 podman[80466]: 2025-12-06 09:38:29.228131318 +0000 UTC m=+0.178152172 container init cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 04:38:29 np0005548915 podman[80466]: 2025-12-06 09:38:29.234146648 +0000 UTC m=+0.184167502 container start cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:38:29 np0005548915 podman[80466]: 2025-12-06 09:38:29.23836856 +0000 UTC m=+0.188389394 container attach cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:38:29 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:29 np0005548915 podman[80474]: 2025-12-06 09:38:29.26273806 +0000 UTC m=+0.170201229 container init 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:29 np0005548915 podman[80474]: 2025-12-06 09:38:29.271877144 +0000 UTC m=+0.179340313 container start 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:29 np0005548915 podman[80474]: 2025-12-06 09:38:29.275391668 +0000 UTC m=+0.182854837 container attach 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:38:29 np0005548915 elegant_rubin[80502]: 167 167
Dec  6 04:38:29 np0005548915 systemd[1]: libpod-46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265.scope: Deactivated successfully.
Dec  6 04:38:29 np0005548915 podman[80474]: 2025-12-06 09:38:29.278284065 +0000 UTC m=+0.185747274 container died 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:38:29 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cd4964552541d83f3c136d219ca6d3865ad225f064a9cb88d54569c63b7e166c-merged.mount: Deactivated successfully.
Dec  6 04:38:29 np0005548915 podman[80474]: 2025-12-06 09:38:29.326995524 +0000 UTC m=+0.234458733 container remove 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 04:38:29 np0005548915 systemd[1]: libpod-conmon-46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265.scope: Deactivated successfully.
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/3562087741' entity='client.admin' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: Reconfiguring mgr.compute-0.qhdjwa (unknown last config time)...
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec  6 04:38:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  6 04:38:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  6 04:38:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:38:30 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  6 04:38:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  6 04:38:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  6 04:38:30 np0005548915 condescending_jemison[80494]: set require_min_compat_client to mimic
Dec  6 04:38:30 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  6 04:38:30 np0005548915 systemd[1]: libpod-cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99.scope: Deactivated successfully.
Dec  6 04:38:30 np0005548915 podman[80466]: 2025-12-06 09:38:30.659540024 +0000 UTC m=+1.609560908 container died cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:38:30 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e-merged.mount: Deactivated successfully.
Dec  6 04:38:30 np0005548915 podman[80466]: 2025-12-06 09:38:30.709677681 +0000 UTC m=+1.659698545 container remove cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:30 np0005548915 systemd[1]: libpod-conmon-cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99.scope: Deactivated successfully.
Dec  6 04:38:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:31 np0005548915 python3[80604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:31 np0005548915 podman[80605]: 2025-12-06 09:38:31.446901408 +0000 UTC m=+0.044427305 container create 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 04:38:31 np0005548915 systemd[1]: Started libpod-conmon-3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe.scope.
Dec  6 04:38:31 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:31 np0005548915 podman[80605]: 2025-12-06 09:38:31.429392302 +0000 UTC m=+0.026918179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:31 np0005548915 podman[80605]: 2025-12-06 09:38:31.537406671 +0000 UTC m=+0.134932628 container init 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:31 np0005548915 podman[80605]: 2025-12-06 09:38:31.547026328 +0000 UTC m=+0.144552215 container start 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:38:31 np0005548915 podman[80605]: 2025-12-06 09:38:31.551431465 +0000 UTC m=+0.148957332 container attach 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:31 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  6 04:38:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Added host compute-0
Dec  6 04:38:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:38:32 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:38:33 np0005548915 ceph-mon[74327]: Added host compute-0
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec  6 04:38:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec  6 04:38:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:34 np0005548915 ceph-mon[74327]: Deploying cephadm binary to compute-1
Dec  6 04:38:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:38 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Added host compute-1
Dec  6 04:38:38 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-1
Dec  6 04:38:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:38:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:39 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:39 np0005548915 ceph-mon[74327]: Added host compute-1
Dec  6 04:38:39 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:38:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:39 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec  6 04:38:39 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec  6 04:38:40 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:38:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:41 np0005548915 ceph-mon[74327]: Deploying cephadm binary to compute-2
Dec  6 04:38:41 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Added host compute-2
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-2
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:43 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec  6 04:38:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:43 np0005548915 practical_pascal[80620]: Added host 'compute-0' with addr '192.168.122.100'
Dec  6 04:38:43 np0005548915 practical_pascal[80620]: Added host 'compute-1' with addr '192.168.122.101'
Dec  6 04:38:43 np0005548915 practical_pascal[80620]: Added host 'compute-2' with addr '192.168.122.102'
Dec  6 04:38:43 np0005548915 practical_pascal[80620]: Scheduled mon update...
Dec  6 04:38:43 np0005548915 practical_pascal[80620]: Scheduled mgr update...
Dec  6 04:38:43 np0005548915 practical_pascal[80620]: Scheduled osd.default_drive_group update...
Dec  6 04:38:43 np0005548915 systemd[1]: libpod-3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe.scope: Deactivated successfully.
Dec  6 04:38:43 np0005548915 podman[80605]: 2025-12-06 09:38:43.901728115 +0000 UTC m=+12.499253992 container died 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 04:38:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9-merged.mount: Deactivated successfully.
Dec  6 04:38:43 np0005548915 podman[80605]: 2025-12-06 09:38:43.947387332 +0000 UTC m=+12.544913189 container remove 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:38:43 np0005548915 systemd[1]: libpod-conmon-3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe.scope: Deactivated successfully.
Dec  6 04:38:44 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:44 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:44 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:44 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:38:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:44 np0005548915 python3[80779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:38:44 np0005548915 podman[80781]: 2025-12-06 09:38:44.48322846 +0000 UTC m=+0.070715456 container create 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:38:44 np0005548915 systemd[1]: Started libpod-conmon-464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d.scope.
Dec  6 04:38:44 np0005548915 podman[80781]: 2025-12-06 09:38:44.451961057 +0000 UTC m=+0.039448033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:38:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:38:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:38:44 np0005548915 podman[80781]: 2025-12-06 09:38:44.599306226 +0000 UTC m=+0.186793202 container init 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 04:38:44 np0005548915 podman[80781]: 2025-12-06 09:38:44.608626494 +0000 UTC m=+0.196113470 container start 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:38:44 np0005548915 podman[80781]: 2025-12-06 09:38:44.612860727 +0000 UTC m=+0.200347713 container attach 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:38:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640512377' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  6 04:38:45 np0005548915 great_turing[80798]: 
Dec  6 04:38:45 np0005548915 great_turing[80798]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":61,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-06T09:37:41:285728+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-06T09:37:41.289249+0000","services":{}},"progress_events":{}}
Dec  6 04:38:45 np0005548915 systemd[1]: libpod-464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d.scope: Deactivated successfully.
Dec  6 04:38:45 np0005548915 podman[80781]: 2025-12-06 09:38:45.087913124 +0000 UTC m=+0.675400100 container died 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:38:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6-merged.mount: Deactivated successfully.
Dec  6 04:38:45 np0005548915 podman[80781]: 2025-12-06 09:38:45.140725152 +0000 UTC m=+0.728212118 container remove 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:38:45 np0005548915 systemd[1]: libpod-conmon-464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d.scope: Deactivated successfully.
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: Added host compute-2
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: Marking host: compute-1 for OSDSpec preview refresh.
Dec  6 04:38:45 np0005548915 ceph-mon[74327]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  6 04:38:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:38:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:38:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:39:02
Dec  6 04:39:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:39:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:39:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] No pools available
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:39:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:39:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:39:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:39:14 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:39:14 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:39:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:39:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:39:15 np0005548915 python3[80860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:39:15 np0005548915 podman[80862]: 2025-12-06 09:39:15.473647367 +0000 UTC m=+0.049802750 container create 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 04:39:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:39:15 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:39:15 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:39:15 np0005548915 systemd[1]: Started libpod-conmon-5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b.scope.
Dec  6 04:39:15 np0005548915 podman[80862]: 2025-12-06 09:39:15.449007545 +0000 UTC m=+0.025162938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:39:15 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:15 np0005548915 podman[80862]: 2025-12-06 09:39:15.566042495 +0000 UTC m=+0.142197868 container init 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:39:15 np0005548915 podman[80862]: 2025-12-06 09:39:15.581863142 +0000 UTC m=+0.158018495 container start 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Dec  6 04:39:15 np0005548915 podman[80862]: 2025-12-06 09:39:15.58559944 +0000 UTC m=+0.161754793 container attach 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:39:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:39:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:39:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  6 04:39:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345211581' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  6 04:39:16 np0005548915 mystifying_mendel[80878]: 
Dec  6 04:39:16 np0005548915 mystifying_mendel[80878]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":92,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-06T09:37:41:285728+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T09:39:04.942012+0000","services":{}},"progress_events":{}}
Dec  6 04:39:16 np0005548915 systemd[1]: libpod-5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b.scope: Deactivated successfully.
Dec  6 04:39:16 np0005548915 podman[80862]: 2025-12-06 09:39:16.049844408 +0000 UTC m=+0.625999841 container died 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:39:16 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825-merged.mount: Deactivated successfully.
Dec  6 04:39:16 np0005548915 podman[80862]: 2025-12-06 09:39:16.229109386 +0000 UTC m=+0.805264739 container remove 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 04:39:16 np0005548915 systemd[1]: libpod-conmon-5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b.scope: Deactivated successfully.
Dec  6 04:39:16 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:39:16 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:39:16 np0005548915 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:39:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 93089259-dd77-4506-8e2a-85cee1c01235 (Updating crash deployment (+1 -> 2))
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:39:17.213+0000 7f8d46bda640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: service_name: mon
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: placement:
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  hosts:
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  - compute-0
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  - compute-1
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  - compute-2
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:39:17.214+0000 7f8d46bda640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: service_name: mgr
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: placement:
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  hosts:
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  - compute-0
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  - compute-1
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  - compute-2
Dec  6 04:39:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec  6 04:39:17 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:39:17 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  6 04:39:18 np0005548915 ceph-mon[74327]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  6 04:39:18 np0005548915 ceph-mon[74327]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  6 04:39:18 np0005548915 ceph-mon[74327]: Deploying daemon crash.compute-1 on compute-1
Dec  6 04:39:18 np0005548915 ceph-mon[74327]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  6 04:39:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 93089259-dd77-4506-8e2a-85cee1c01235 (Updating crash deployment (+1 -> 2))
Dec  6 04:39:20 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 93089259-dd77-4506-8e2a-85cee1c01235 (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:39:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:39:20 np0005548915 podman[81004]: 2025-12-06 09:39:20.700842804 +0000 UTC m=+0.025788486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:20 np0005548915 podman[81004]: 2025-12-06 09:39:20.872693647 +0000 UTC m=+0.197639319 container create 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:39:20 np0005548915 systemd[1]: Started libpod-conmon-4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e.scope.
Dec  6 04:39:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:21 np0005548915 podman[81004]: 2025-12-06 09:39:21.26545406 +0000 UTC m=+0.590399732 container init 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:21 np0005548915 podman[81004]: 2025-12-06 09:39:21.271933858 +0000 UTC m=+0.596879520 container start 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 04:39:21 np0005548915 goofy_payne[81020]: 167 167
Dec  6 04:39:21 np0005548915 systemd[1]: libpod-4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e.scope: Deactivated successfully.
Dec  6 04:39:21 np0005548915 podman[81004]: 2025-12-06 09:39:21.458064334 +0000 UTC m=+0.783010016 container attach 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 04:39:21 np0005548915 podman[81004]: 2025-12-06 09:39:21.458791385 +0000 UTC m=+0.783737037 container died 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 04:39:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-558fd956a88cf8039dc9b37ca045489257d9c8d683d8505ea06715666f239e6d-merged.mount: Deactivated successfully.
Dec  6 04:39:21 np0005548915 podman[81004]: 2025-12-06 09:39:21.740821489 +0000 UTC m=+1.065767151 container remove 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Dec  6 04:39:21 np0005548915 systemd[1]: libpod-conmon-4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e.scope: Deactivated successfully.
Dec  6 04:39:21 np0005548915 podman[81045]: 2025-12-06 09:39:21.886024853 +0000 UTC m=+0.027458794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"} v 0)
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]: dispatch
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:22 np0005548915 podman[81045]: 2025-12-06 09:39:22.118683822 +0000 UTC m=+0.260117753 container create 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]': finished
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:22 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:22 np0005548915 systemd[1]: Started libpod-conmon-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope.
Dec  6 04:39:22 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:22 np0005548915 podman[81045]: 2025-12-06 09:39:22.213222723 +0000 UTC m=+0.354656704 container init 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:22 np0005548915 podman[81045]: 2025-12-06 09:39:22.229048169 +0000 UTC m=+0.370482090 container start 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:22 np0005548915 podman[81045]: 2025-12-06 09:39:22.233421646 +0000 UTC m=+0.374855577 container attach 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:22 np0005548915 magical_antonelli[81061]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:39:22 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:22 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:22 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7899c4d8-edb4-4836-b838-c4aa702ad7af
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/389473799' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]: dispatch
Dec  6 04:39:22 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]': finished
Dec  6 04:39:23 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 2 completed events
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"} v 0)
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]: dispatch
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]': finished
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:23 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:23 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:23 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  6 04:39:23 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  6 04:39:23 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 04:39:23 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:23 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  6 04:39:23 np0005548915 lvm[81125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:39:23 np0005548915 lvm[81125]: VG ceph_vg0 finished
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]: dispatch
Dec  6 04:39:23 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]': finished
Dec  6 04:39:24 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  6 04:39:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  6 04:39:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469659317' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  6 04:39:24 np0005548915 magical_antonelli[81061]: stderr: got monmap epoch 1
Dec  6 04:39:24 np0005548915 magical_antonelli[81061]: --> Creating keyring file for osd.1
Dec  6 04:39:24 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  6 04:39:24 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  6 04:39:24 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 7899c4d8-edb4-4836-b838-c4aa702ad7af --setuser ceph --setgroup ceph
Dec  6 04:39:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:24 np0005548915 ceph-mon[74327]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  6 04:39:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:27 np0005548915 magical_antonelli[81061]: stderr: 2025-12-06T09:39:24.332+0000 7f9db4598740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec  6 04:39:27 np0005548915 magical_antonelli[81061]: stderr: 2025-12-06T09:39:24.593+0000 7f9db4598740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  6 04:39:27 np0005548915 magical_antonelli[81061]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  6 04:39:27 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  6 04:39:27 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  6 04:39:28 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:28 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:28 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 04:39:28 np0005548915 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  6 04:39:28 np0005548915 magical_antonelli[81061]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  6 04:39:28 np0005548915 magical_antonelli[81061]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  6 04:39:28 np0005548915 systemd[1]: libpod-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope: Deactivated successfully.
Dec  6 04:39:28 np0005548915 systemd[1]: libpod-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope: Consumed 2.523s CPU time.
Dec  6 04:39:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  6 04:39:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  6 04:39:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:39:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:39:28 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec  6 04:39:28 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec  6 04:39:28 np0005548915 podman[82040]: 2025-12-06 09:39:28.222364184 +0000 UTC m=+0.038012600 container died 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:39:28 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4-merged.mount: Deactivated successfully.
Dec  6 04:39:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  6 04:39:28 np0005548915 podman[82040]: 2025-12-06 09:39:28.284817247 +0000 UTC m=+0.100465583 container remove 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:28 np0005548915 systemd[1]: libpod-conmon-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope: Deactivated successfully.
Dec  6 04:39:28 np0005548915 podman[82143]: 2025-12-06 09:39:28.98084071 +0000 UTC m=+0.086298314 container create 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 04:39:29 np0005548915 systemd[1]: Started libpod-conmon-09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae.scope.
Dec  6 04:39:29 np0005548915 podman[82143]: 2025-12-06 09:39:28.940625798 +0000 UTC m=+0.046083412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:29 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:29 np0005548915 podman[82143]: 2025-12-06 09:39:29.264926524 +0000 UTC m=+0.370384168 container init 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:39:29 np0005548915 podman[82143]: 2025-12-06 09:39:29.273846201 +0000 UTC m=+0.379303765 container start 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 04:39:29 np0005548915 youthful_dirac[82159]: 167 167
Dec  6 04:39:29 np0005548915 systemd[1]: libpod-09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae.scope: Deactivated successfully.
Dec  6 04:39:29 np0005548915 podman[82143]: 2025-12-06 09:39:29.334692519 +0000 UTC m=+0.440150083 container attach 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 04:39:29 np0005548915 ceph-mon[74327]: Deploying daemon osd.0 on compute-1
Dec  6 04:39:29 np0005548915 podman[82143]: 2025-12-06 09:39:29.33678725 +0000 UTC m=+0.442244854 container died 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:39:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:29 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2571f5ea6816571cb4ea473a09801e3498e0e177a389c6c14d1c9a07bc7c4dfa-merged.mount: Deactivated successfully.
Dec  6 04:39:29 np0005548915 podman[82143]: 2025-12-06 09:39:29.396123333 +0000 UTC m=+0.501580937 container remove 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:39:29 np0005548915 systemd[1]: libpod-conmon-09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae.scope: Deactivated successfully.
Dec  6 04:39:29 np0005548915 podman[82183]: 2025-12-06 09:39:29.620344989 +0000 UTC m=+0.085563463 container create 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:39:29 np0005548915 podman[82183]: 2025-12-06 09:39:29.577866282 +0000 UTC m=+0.043084816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:29 np0005548915 systemd[1]: Started libpod-conmon-3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7.scope.
Dec  6 04:39:29 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:30 np0005548915 podman[82183]: 2025-12-06 09:39:30.035169209 +0000 UTC m=+0.500387723 container init 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:39:30 np0005548915 podman[82183]: 2025-12-06 09:39:30.046940909 +0000 UTC m=+0.512159343 container start 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 04:39:30 np0005548915 podman[82183]: 2025-12-06 09:39:30.077536273 +0000 UTC m=+0.542754707 container attach 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]: {
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:    "1": [
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:        {
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "devices": [
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "/dev/loop3"
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            ],
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "lv_name": "ceph_lv0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "lv_size": "21470642176",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "name": "ceph_lv0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "tags": {
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.cluster_name": "ceph",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.crush_device_class": "",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.encrypted": "0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.osd_id": "1",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.type": "block",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.vdo": "0",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:                "ceph.with_tpm": "0"
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            },
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "type": "block",
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:            "vg_name": "ceph_vg0"
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:        }
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]:    ]
Dec  6 04:39:30 np0005548915 stoic_keldysh[82199]: }
Dec  6 04:39:30 np0005548915 systemd[1]: libpod-3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7.scope: Deactivated successfully.
Dec  6 04:39:30 np0005548915 podman[82183]: 2025-12-06 09:39:30.360164046 +0000 UTC m=+0.825382470 container died 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:39:30 np0005548915 systemd[1]: var-lib-containers-storage-overlay-db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344-merged.mount: Deactivated successfully.
Dec  6 04:39:30 np0005548915 podman[82183]: 2025-12-06 09:39:30.543951013 +0000 UTC m=+1.009169477 container remove 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 04:39:30 np0005548915 systemd[1]: libpod-conmon-3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7.scope: Deactivated successfully.
Dec  6 04:39:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  6 04:39:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  6 04:39:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:39:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:39:30 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  6 04:39:30 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  6 04:39:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:31 np0005548915 podman[82308]: 2025-12-06 09:39:31.24773197 +0000 UTC m=+0.047584446 container create bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:31 np0005548915 systemd[1]: Started libpod-conmon-bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b.scope.
Dec  6 04:39:31 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:31 np0005548915 podman[82308]: 2025-12-06 09:39:31.229577015 +0000 UTC m=+0.029429521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:31 np0005548915 podman[82308]: 2025-12-06 09:39:31.329861201 +0000 UTC m=+0.129713677 container init bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:39:31 np0005548915 podman[82308]: 2025-12-06 09:39:31.339116138 +0000 UTC m=+0.138968614 container start bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:39:31 np0005548915 podman[82308]: 2025-12-06 09:39:31.342462046 +0000 UTC m=+0.142314522 container attach bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:31 np0005548915 epic_carver[82324]: 167 167
Dec  6 04:39:31 np0005548915 systemd[1]: libpod-bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b.scope: Deactivated successfully.
Dec  6 04:39:31 np0005548915 podman[82308]: 2025-12-06 09:39:31.345648398 +0000 UTC m=+0.145500874 container died bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:39:31 np0005548915 systemd[1]: var-lib-containers-storage-overlay-49256ea79608eb0a822be1d90781bc48bb91fdebd83233b5c195ae26006f1b06-merged.mount: Deactivated successfully.
Dec  6 04:39:31 np0005548915 podman[82308]: 2025-12-06 09:39:31.38417119 +0000 UTC m=+0.184023656 container remove bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:39:31 np0005548915 systemd[1]: libpod-conmon-bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b.scope: Deactivated successfully.
Dec  6 04:39:31 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  6 04:39:31 np0005548915 ceph-mon[74327]: Deploying daemon osd.1 on compute-0
Dec  6 04:39:31 np0005548915 podman[82353]: 2025-12-06 09:39:31.623831312 +0000 UTC m=+0.044788235 container create 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:39:31 np0005548915 systemd[1]: Started libpod-conmon-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope.
Dec  6 04:39:31 np0005548915 podman[82353]: 2025-12-06 09:39:31.604984057 +0000 UTC m=+0.025940970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:31 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:31 np0005548915 podman[82353]: 2025-12-06 09:39:31.736022782 +0000 UTC m=+0.156979755 container init 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:39:31 np0005548915 podman[82353]: 2025-12-06 09:39:31.745897438 +0000 UTC m=+0.166854361 container start 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:39:31 np0005548915 podman[82353]: 2025-12-06 09:39:31.749783959 +0000 UTC m=+0.170740882 container attach 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test[82370]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  6 04:39:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test[82370]:                            [--no-systemd] [--no-tmpfs]
Dec  6 04:39:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test[82370]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  6 04:39:31 np0005548915 systemd[1]: libpod-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope: Deactivated successfully.
Dec  6 04:39:31 np0005548915 conmon[82370]: conmon 77430bab5b3dd3fe5413 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope/container/memory.events
Dec  6 04:39:31 np0005548915 podman[82353]: 2025-12-06 09:39:31.933675141 +0000 UTC m=+0.354632044 container died 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:39:31 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9-merged.mount: Deactivated successfully.
Dec  6 04:39:31 np0005548915 podman[82353]: 2025-12-06 09:39:31.983691694 +0000 UTC m=+0.404648617 container remove 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:39:32 np0005548915 systemd[1]: libpod-conmon-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope: Deactivated successfully.
Dec  6 04:39:32 np0005548915 systemd[1]: Reloading.
Dec  6 04:39:32 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:39:32 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:39:32 np0005548915 systemd[1]: Reloading.
Dec  6 04:39:32 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:39:32 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:39:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:32 np0005548915 systemd[1]: Starting Ceph osd.1 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:39:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:39:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:39:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:39:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:39:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:39:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:39:33 np0005548915 podman[82529]: 2025-12-06 09:39:33.135656005 +0000 UTC m=+0.061630831 container create 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 04:39:33 np0005548915 podman[82529]: 2025-12-06 09:39:33.107169892 +0000 UTC m=+0.033144738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:33 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:33 np0005548915 podman[82529]: 2025-12-06 09:39:33.243454348 +0000 UTC m=+0.169429244 container init 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 04:39:33 np0005548915 podman[82529]: 2025-12-06 09:39:33.258606266 +0000 UTC m=+0.184581092 container start 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:33 np0005548915 podman[82529]: 2025-12-06 09:39:33.26289106 +0000 UTC m=+0.188865896 container attach 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 04:39:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:33 np0005548915 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:33 np0005548915 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:33 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:33 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:34 np0005548915 lvm[82626]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:39:34 np0005548915 lvm[82626]: VG ceph_vg0 finished
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:34 np0005548915 bash[82529]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  6 04:39:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 04:39:34 np0005548915 bash[82529]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  6 04:39:34 np0005548915 bash[82529]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  6 04:39:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  6 04:39:34 np0005548915 systemd[1]: libpod-8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6.scope: Deactivated successfully.
Dec  6 04:39:34 np0005548915 podman[82529]: 2025-12-06 09:39:34.643991667 +0000 UTC m=+1.569966503 container died 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:39:34 np0005548915 systemd[1]: libpod-8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6.scope: Consumed 1.626s CPU time.
Dec  6 04:39:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884-merged.mount: Deactivated successfully.
Dec  6 04:39:34 np0005548915 podman[82529]: 2025-12-06 09:39:34.700463718 +0000 UTC m=+1.626438554 container remove 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:39:34 np0005548915 podman[82784]: 2025-12-06 09:39:34.873671911 +0000 UTC m=+0.021141961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:35 np0005548915 podman[82784]: 2025-12-06 09:39:35.003311834 +0000 UTC m=+0.150781854 container create 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 04:39:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:35 np0005548915 podman[82784]: 2025-12-06 09:39:35.176883907 +0000 UTC m=+0.324353937 container init 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:39:35 np0005548915 podman[82784]: 2025-12-06 09:39:35.186094634 +0000 UTC m=+0.333564634 container start 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: pidfile_write: ignore empty --pid-file
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:35 np0005548915 bash[82784]: 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec
Dec  6 04:39:35 np0005548915 systemd[1]: Started Ceph osd.1 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:35 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec  6 04:39:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:36 np0005548915 podman[82921]: 2025-12-06 09:39:36.299584092 +0000 UTC m=+0.066912033 container create 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:39:36 np0005548915 systemd[1]: Started libpod-conmon-5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f.scope.
Dec  6 04:39:36 np0005548915 podman[82921]: 2025-12-06 09:39:36.278204304 +0000 UTC m=+0.045532365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:36 np0005548915 podman[82921]: 2025-12-06 09:39:36.411838034 +0000 UTC m=+0.179166025 container init 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 04:39:36 np0005548915 podman[82921]: 2025-12-06 09:39:36.419110815 +0000 UTC m=+0.186438766 container start 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:39:36 np0005548915 podman[82921]: 2025-12-06 09:39:36.422438551 +0000 UTC m=+0.189766502 container attach 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:39:36 np0005548915 nice_ritchie[82939]: 167 167
Dec  6 04:39:36 np0005548915 systemd[1]: libpod-5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f.scope: Deactivated successfully.
Dec  6 04:39:36 np0005548915 podman[82921]: 2025-12-06 09:39:36.426984092 +0000 UTC m=+0.194312043 container died 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:39:36 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6f474c1e99f3993961332dd72f5e1407a84ce9db9e63c5d20a5e6afeed0162f4-merged.mount: Deactivated successfully.
Dec  6 04:39:36 np0005548915 podman[82921]: 2025-12-06 09:39:36.476664006 +0000 UTC m=+0.243991987 container remove 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:36 np0005548915 systemd[1]: libpod-conmon-5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f.scope: Deactivated successfully.
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: load: jerasure load: lrc 
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:36 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:36 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:36 np0005548915 podman[82971]: 2025-12-06 09:39:36.725982767 +0000 UTC m=+0.064284688 container create da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:36 np0005548915 systemd[1]: Started libpod-conmon-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope.
Dec  6 04:39:36 np0005548915 podman[82971]: 2025-12-06 09:39:36.69839936 +0000 UTC m=+0.036701331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:36 np0005548915 podman[82971]: 2025-12-06 09:39:36.829216409 +0000 UTC m=+0.167518370 container init da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  6 04:39:36 np0005548915 podman[82971]: 2025-12-06 09:39:36.843154711 +0000 UTC m=+0.181456622 container start da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:39:36 np0005548915 podman[82971]: 2025-12-06 09:39:36.847728794 +0000 UTC m=+0.186030775 container attach da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 04:39:36 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount shared_bdev_used = 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: RocksDB version: 7.9.2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Git sha 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: DB SUMMARY
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: DB Session ID:  CURIEEK1KVXV3KZ3OECU
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: CURRENT file:  CURRENT
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: IDENTITY file:  IDENTITY
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.error_if_exists: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.create_if_missing: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.paranoid_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                     Options.env: 0x55fcde5b1dc0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                Options.info_log: 0x55fcde5b57a0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_file_opening_threads: 16
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                              Options.statistics: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.use_fsync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.max_log_file_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.allow_fallocate: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.use_direct_reads: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.create_missing_column_families: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                              Options.db_log_dir: 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                 Options.wal_dir: db.wal
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.advise_random_on_open: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.write_buffer_manager: 0x55fcde6e0a00
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                            Options.rate_limiter: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.unordered_write: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.row_cache: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                              Options.wal_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.allow_ingest_behind: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.two_write_queues: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.manual_wal_flush: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.wal_compression: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.atomic_flush: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.log_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.allow_data_in_errors: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.db_host_id: __hostname__
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_background_jobs: 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_background_compactions: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_subcompactions: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.max_open_files: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.bytes_per_sync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.max_background_flushes: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Compression algorithms supported:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kZSTD supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kXpressCompression supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kBZip2Compression supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kLZ4Compression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kZlibCompression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kSnappyCompression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   cache_index_and_filter_blocks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   pin_top_level_index_and_filter: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_type: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   data_block_index_type: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_shortening: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   checksum: 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   no_block_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache: 0x55fcdd7db350
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_name: BinnedLRUCache
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_options:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     capacity : 483183820
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     num_shard_bits : 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     strict_capacity_limit : 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     high_pri_pool_ratio: 0.000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_compressed: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   persistent_cache: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_size: 4096
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_size_deviation: 10
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_restart_interval: 16
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_block_restart_interval: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   metadata_block_size: 4096
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   partition_filters: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   use_delta_encoding: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   filter_policy: bloomfilter
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   whole_key_filtering: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   verify_compression: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   read_amp_bytes_per_bit: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   format_version: 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   enable_index_compression: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_align: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   max_auto_readahead_size: 262144
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   prepopulate_block_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   initial_auto_readahead_size: 8192
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   cache_index_and_filter_blocks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   pin_top_level_index_and_filter: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_type: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   data_block_index_type: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_shortening: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   checksum: 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   no_block_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache: 0x55fcdd7db350
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_name: BinnedLRUCache
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_options:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     capacity : 483183820
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     num_shard_bits : 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     strict_capacity_limit : 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     high_pri_pool_ratio: 0.000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_compressed: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   persistent_cache: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_size: 4096
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_size_deviation: 10
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_restart_interval: 16
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_block_restart_interval: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   metadata_block_size: 4096
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   partition_filters: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   use_delta_encoding: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   filter_policy: bloomfilter
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   whole_key_filtering: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   verify_compression: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   read_amp_bytes_per_bit: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   format_version: 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   enable_index_compression: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_align: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   max_auto_readahead_size: 262144
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   prepopulate_block_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   initial_auto_readahead_size: 8192
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   cache_index_and_filter_blocks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   pin_top_level_index_and_filter: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_type: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   data_block_index_type: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_shortening: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   checksum: 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   no_block_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache: 0x55fcdd7db350
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_name: BinnedLRUCache
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_options:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     capacity : 483183820
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     num_shard_bits : 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     strict_capacity_limit : 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     high_pri_pool_ratio: 0.000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_cache_compressed: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   persistent_cache: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_size: 4096
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_size_deviation: 10
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_restart_interval: 16
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   index_block_restart_interval: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   metadata_block_size: 4096
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   partition_filters: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   use_delta_encoding: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   filter_policy: bloomfilter
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   whole_key_filtering: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   verify_compression: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   read_amp_bytes_per_bit: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   format_version: 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   enable_index_compression: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   block_align: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   max_auto_readahead_size: 262144
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   prepopulate_block_cache: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   initial_auto_readahead_size: 8192
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7903e0e-8b62-45e4-a979-56d5b4ac2659
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977293236, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977293502, "job": 1, "event": "recovery_finished"}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: freelist init
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: freelist _read_cfg
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs umount
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) close
Dec  6 04:39:37 np0005548915 lvm[83265]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:39:37 np0005548915 lvm[83265]: VG ceph_vg0 finished
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluefs mount shared_bdev_used = 4718592
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: RocksDB version: 7.9.2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Git sha 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: DB SUMMARY
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: DB Session ID:  CURIEEK1KVXV3KZ3OECV
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: CURRENT file:  CURRENT
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: IDENTITY file:  IDENTITY
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.error_if_exists: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.create_if_missing: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.paranoid_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                     Options.env: 0x55fcde784310
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                Options.info_log: 0x55fcde5b5920
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_file_opening_threads: 16
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                              Options.statistics: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.use_fsync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.max_log_file_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.allow_fallocate: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.use_direct_reads: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.create_missing_column_families: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                              Options.db_log_dir: 
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                                 Options.wal_dir: db.wal
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.advise_random_on_open: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.write_buffer_manager: 0x55fcde6e0a00
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                            Options.rate_limiter: (nil)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.unordered_write: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.row_cache: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                              Options.wal_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.allow_ingest_behind: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.two_write_queues: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.manual_wal_flush: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.wal_compression: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.atomic_flush: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.log_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.allow_data_in_errors: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.db_host_id: __hostname__
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_background_jobs: 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_background_compactions: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_subcompactions: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.max_open_files: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.bytes_per_sync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.max_background_flushes: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Compression algorithms supported:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kZSTD supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kXpressCompression supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kBZip2Compression supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kLZ4Compression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kZlibCompression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: 	kSnappyCompression supported: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fcdd7db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fcdd7db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 condescending_clarke[82988]: {}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fcdd7da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7903e0e-8b62-45e4-a979-56d5b4ac2659
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977564407, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977568341, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7903e0e-8b62-45e4-a979-56d5b4ac2659", "db_session_id": "CURIEEK1KVXV3KZ3OECV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977572405, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7903e0e-8b62-45e4-a979-56d5b4ac2659", "db_session_id": "CURIEEK1KVXV3KZ3OECV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977593233, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7903e0e-8b62-45e4-a979-56d5b4ac2659", "db_session_id": "CURIEEK1KVXV3KZ3OECV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977597544, "job": 1, "event": "recovery_finished"}
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:37 np0005548915 systemd[1]: libpod-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope: Deactivated successfully.
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:37 np0005548915 systemd[1]: libpod-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope: Consumed 1.230s CPU time.
Dec  6 04:39:37 np0005548915 podman[82971]: 2025-12-06 09:39:37.6817506 +0000 UTC m=+1.020052481 container died da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fcde7d6000
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: DB pointer 0x55fcde792000
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  6 04:39:37 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:37 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: _get_class not permitted to load lua
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: _get_class not permitted to load sdk
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1 0 load_pgs
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1 0 load_pgs opened 0 pgs
Dec  6 04:39:37 np0005548915 ceph-osd[82803]: osd.1 0 log_to_monitors true
Dec  6 04:39:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1[82799]: 2025-12-06T09:39:37.688+0000 7f0f6fd25740 -1 osd.1 0 log_to_monitors true
Dec  6 04:39:37 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:37 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  6 04:39:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa-merged.mount: Deactivated successfully.
Dec  6 04:39:37 np0005548915 podman[82971]: 2025-12-06 09:39:37.729104398 +0000 UTC m=+1.067406279 container remove da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:37 np0005548915 systemd[1]: libpod-conmon-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope: Deactivated successfully.
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:39:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  6 04:39:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  6 04:39:38 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:38 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:38 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:38 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:38 np0005548915 podman[83637]: 2025-12-06 09:39:38.808701679 +0000 UTC m=+0.070160908 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:38 np0005548915 podman[83637]: 2025-12-06 09:39:38.927991953 +0000 UTC m=+0.189451222 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:39:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:39 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Dec  6 04:39:39 np0005548915 ceph-osd[82803]: osd.1 0 done with init, starting boot process
Dec  6 04:39:39 np0005548915 ceph-osd[82803]: osd.1 0 start_boot
Dec  6 04:39:39 np0005548915 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  6 04:39:39 np0005548915 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  6 04:39:39 np0005548915 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  6 04:39:39 np0005548915 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  6 04:39:39 np0005548915 ceph-osd[82803]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:39 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:39 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:39 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:39 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:40 np0005548915 podman[83897]: 2025-12-06 09:39:40.667904014 +0000 UTC m=+0.065201594 container create abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:39:40 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:40 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:40 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:40 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:40 np0005548915 podman[83897]: 2025-12-06 09:39:40.63037918 +0000 UTC m=+0.027676750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:40 np0005548915 systemd[1]: Started libpod-conmon-abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7.scope.
Dec  6 04:39:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:40 np0005548915 podman[83897]: 2025-12-06 09:39:40.786251413 +0000 UTC m=+0.183549003 container init abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:39:40 np0005548915 podman[83897]: 2025-12-06 09:39:40.795411217 +0000 UTC m=+0.192708817 container start abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:39:40 np0005548915 agitated_tesla[83913]: 167 167
Dec  6 04:39:40 np0005548915 systemd[1]: libpod-abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7.scope: Deactivated successfully.
Dec  6 04:39:40 np0005548915 podman[83897]: 2025-12-06 09:39:40.822889591 +0000 UTC m=+0.220187161 container attach abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:39:40 np0005548915 podman[83897]: 2025-12-06 09:39:40.823761136 +0000 UTC m=+0.221058696 container died abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:39:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-16b5cebfd76762e367e64435747ac99f35e7965f3bb6168da3c600ad6d0321d8-merged.mount: Deactivated successfully.
Dec  6 04:39:40 np0005548915 podman[83897]: 2025-12-06 09:39:40.965427368 +0000 UTC m=+0.362724948 container remove abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  6 04:39:40 np0005548915 systemd[1]: libpod-conmon-abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7.scope: Deactivated successfully.
Dec  6 04:39:41 np0005548915 podman[83938]: 2025-12-06 09:39:41.157990859 +0000 UTC m=+0.069828008 container create 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:39:41 np0005548915 podman[83938]: 2025-12-06 09:39:41.114949686 +0000 UTC m=+0.026786885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:39:41 np0005548915 systemd[1]: Started libpod-conmon-87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430.scope.
Dec  6 04:39:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:41 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:41 np0005548915 podman[83938]: 2025-12-06 09:39:41.281321061 +0000 UTC m=+0.193158230 container init 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  6 04:39:41 np0005548915 podman[83938]: 2025-12-06 09:39:41.288233141 +0000 UTC m=+0.200070290 container start 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:41 np0005548915 podman[83938]: 2025-12-06 09:39:41.309723542 +0000 UTC m=+0.221560681 container attach 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  6 04:39:41 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:41 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:41 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec  6 04:39:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:41 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]: [
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:    {
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "available": false,
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "being_replaced": false,
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "ceph_device_lvm": false,
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "lsm_data": {},
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "lvs": [],
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "path": "/dev/sr0",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "rejected_reasons": [
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "Insufficient space (<5GB)",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "Has a FileSystem"
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        ],
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        "sys_api": {
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "actuators": null,
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "device_nodes": [
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:                "sr0"
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            ],
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "devname": "sr0",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "human_readable_size": "482.00 KB",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "id_bus": "ata",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "model": "QEMU DVD-ROM",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "nr_requests": "2",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "parent": "/dev/sr0",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "partitions": {},
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "path": "/dev/sr0",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "removable": "1",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "rev": "2.5+",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "ro": "0",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "rotational": "1",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "sas_address": "",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "sas_device_handle": "",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "scheduler_mode": "mq-deadline",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "sectors": 0,
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "sectorsize": "2048",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "size": 493568.0,
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "support_discard": "2048",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "type": "disk",
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:            "vendor": "QEMU"
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:        }
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]:    }
Dec  6 04:39:42 np0005548915 great_chebyshev[83954]: ]
Dec  6 04:39:42 np0005548915 systemd[1]: libpod-87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430.scope: Deactivated successfully.
Dec  6 04:39:42 np0005548915 podman[83938]: 2025-12-06 09:39:42.048766177 +0000 UTC m=+0.960603346 container died 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:42 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec  6 04:39:42 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:42 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:42 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9-merged.mount: Deactivated successfully.
Dec  6 04:39:43 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:43 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:43 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec  6 04:39:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:43 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:44 np0005548915 podman[83938]: 2025-12-06 09:39:44.020843782 +0000 UTC m=+2.932680961 container remove 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:44 np0005548915 systemd[1]: libpod-conmon-87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430.scope: Deactivated successfully.
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:44 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:39:45 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec  6 04:39:45 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.703 iops: 6324.094 elapsed_sec: 0.474
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: log_channel(cluster) log [WRN] : OSD bench result of 6324.094408 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 0 waiting for initial osdmap
Dec  6 04:39:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1[82799]: 2025-12-06T09:39:45.621+0000 7f0f6bca8640 -1 osd.1 0 waiting for initial osdmap
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 9 check_osdmap_features require_osd_release unknown -> squid
Dec  6 04:39:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1[82799]: 2025-12-06T09:39:45.647+0000 7f0f672d0640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 9 set_numa_affinity not setting numa affinity
Dec  6 04:39:45 np0005548915 ceph-osd[82803]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec  6 04:39:45 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:45 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:45 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:45 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-1 to  5247M
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: OSD bench result of 6324.094408 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672] boot
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:39:46 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:46 np0005548915 ceph-osd[82803]: osd.1 10 state: booting -> active
Dec  6 04:39:46 np0005548915 python3[85071]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:39:46 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:46 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:46 np0005548915 podman[85073]: 2025-12-06 09:39:46.646167385 +0000 UTC m=+0.040264274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:39:46 np0005548915 podman[85073]: 2025-12-06 09:39:46.840107716 +0000 UTC m=+0.234204605 container create 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:39:46 np0005548915 systemd[1]: Started libpod-conmon-1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba.scope.
Dec  6 04:39:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:46 np0005548915 podman[85073]: 2025-12-06 09:39:46.954014036 +0000 UTC m=+0.348110985 container init 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 04:39:46 np0005548915 podman[85073]: 2025-12-06 09:39:46.966121786 +0000 UTC m=+0.360218675 container start 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:39:46 np0005548915 podman[85073]: 2025-12-06 09:39:46.969799912 +0000 UTC m=+0.363896821 container attach 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 04:39:47 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] creating mgr pool
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  6 04:39:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672] boot
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  6 04:39:47 np0005548915 ceph-osd[82803]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  6 04:39:47 np0005548915 ceph-osd[82803]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  6 04:39:47 np0005548915 ceph-osd[82803]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:47 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:39:47 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3530193031' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  6 04:39:47 np0005548915 magical_dewdney[85090]: 
Dec  6 04:39:47 np0005548915 magical_dewdney[85090]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":123,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":1,"osd_up_since":1765013986,"num_in_osds":2,"osd_in_since":1765013963,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":446988288,"bytes_avail":21023653888,"bytes_total":21470642176},"fsmap":{"epoch":1,"btime":"2025-12-06T09:37:41:285728+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T09:39:04.942012+0000","services":{}},"progress_events":{}}
Dec  6 04:39:47 np0005548915 systemd[1]: libpod-1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba.scope: Deactivated successfully.
Dec  6 04:39:47 np0005548915 podman[85073]: 2025-12-06 09:39:47.482814788 +0000 UTC m=+0.876911647 container died 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:39:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91-merged.mount: Deactivated successfully.
Dec  6 04:39:47 np0005548915 podman[85073]: 2025-12-06 09:39:47.519198929 +0000 UTC m=+0.913295788 container remove 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 04:39:47 np0005548915 systemd[1]: libpod-conmon-1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba.scope: Deactivated successfully.
Dec  6 04:39:47 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:47 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  6 04:39:48 np0005548915 python3[85153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:39:48 np0005548915 podman[85154]: 2025-12-06 09:39:48.056326442 +0000 UTC m=+0.024401966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:39:48 np0005548915 podman[85154]: 2025-12-06 09:39:48.178979305 +0000 UTC m=+0.147054759 container create 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 04:39:48 np0005548915 systemd[1]: Started libpod-conmon-9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267.scope.
Dec  6 04:39:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf52e731fd2c02df7096c7c1441b148e8e70645b7c0b33b2bc32105c5a4827ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf52e731fd2c02df7096c7c1441b148e8e70645b7c0b33b2bc32105c5a4827ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:48 np0005548915 podman[85154]: 2025-12-06 09:39:48.264580737 +0000 UTC m=+0.232656211 container init 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:48 np0005548915 podman[85154]: 2025-12-06 09:39:48.271929889 +0000 UTC m=+0.240005333 container start 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:39:48 np0005548915 podman[85154]: 2025-12-06 09:39:48.277269233 +0000 UTC m=+0.245344697 container attach 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283] boot
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: OSD bench result of 5666.545158 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  6 04:39:48 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:39:48 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] creating main.db for devicehealth
Dec  6 04:39:48 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  6 04:39:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:39:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qhdjwa(active, since 106s)
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec  6 04:39:49 np0005548915 unruffled_brown[85169]: pool 'vms' created
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283] boot
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec  6 04:39:49 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:39:49 np0005548915 systemd[1]: libpod-9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267.scope: Deactivated successfully.
Dec  6 04:39:49 np0005548915 podman[85154]: 2025-12-06 09:39:49.365707268 +0000 UTC m=+1.333782712 container died 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bf52e731fd2c02df7096c7c1441b148e8e70645b7c0b33b2bc32105c5a4827ee-merged.mount: Deactivated successfully.
Dec  6 04:39:49 np0005548915 podman[85154]: 2025-12-06 09:39:49.416069323 +0000 UTC m=+1.384144777 container remove 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 04:39:49 np0005548915 systemd[1]: libpod-conmon-9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267.scope: Deactivated successfully.
Dec  6 04:39:49 np0005548915 python3[85248]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:39:49 np0005548915 podman[85249]: 2025-12-06 09:39:49.800927298 +0000 UTC m=+0.027511975 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:39:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  6 04:39:50 np0005548915 podman[85249]: 2025-12-06 09:39:50.499832614 +0000 UTC m=+0.726417231 container create 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec  6 04:39:50 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec  6 04:39:50 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:39:50 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:39:50 np0005548915 systemd[1]: Started libpod-conmon-622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9.scope.
Dec  6 04:39:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97cea853dbf9130963304a957dfed8f1d435904448ce3de2a0bd4925c625425/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97cea853dbf9130963304a957dfed8f1d435904448ce3de2a0bd4925c625425/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:50 np0005548915 podman[85249]: 2025-12-06 09:39:50.608018789 +0000 UTC m=+0.834603416 container init 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:39:50 np0005548915 podman[85249]: 2025-12-06 09:39:50.621021864 +0000 UTC m=+0.847606481 container start 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:39:50 np0005548915 podman[85249]: 2025-12-06 09:39:50.640437274 +0000 UTC m=+0.867021901 container attach 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:39:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  6 04:39:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:39:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v61: 2 pgs: 2 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  6 04:39:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  6 04:39:51 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:39:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:39:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec  6 04:39:52 np0005548915 funny_dewdney[85264]: pool 'volumes' created
Dec  6 04:39:52 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec  6 04:39:52 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:39:52 np0005548915 systemd[1]: libpod-622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9.scope: Deactivated successfully.
Dec  6 04:39:52 np0005548915 podman[85249]: 2025-12-06 09:39:52.135231856 +0000 UTC m=+2.361816463 container died 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 04:39:52 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c97cea853dbf9130963304a957dfed8f1d435904448ce3de2a0bd4925c625425-merged.mount: Deactivated successfully.
Dec  6 04:39:52 np0005548915 podman[85249]: 2025-12-06 09:39:52.1793558 +0000 UTC m=+2.405940377 container remove 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:39:52 np0005548915 systemd[1]: libpod-conmon-622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9.scope: Deactivated successfully.
Dec  6 04:39:52 np0005548915 python3[85328]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:39:52 np0005548915 podman[85329]: 2025-12-06 09:39:52.719556902 +0000 UTC m=+0.090557817 container create e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 04:39:52 np0005548915 podman[85329]: 2025-12-06 09:39:52.657655394 +0000 UTC m=+0.028656399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:39:52 np0005548915 systemd[1]: Started libpod-conmon-e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277.scope.
Dec  6 04:39:52 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21674f9ed1e08dddc4b4cec40f9e266cb78b7dc81c3f358cdc1a5d6b4705a78b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21674f9ed1e08dddc4b4cec40f9e266cb78b7dc81c3f358cdc1a5d6b4705a78b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:52 np0005548915 podman[85329]: 2025-12-06 09:39:52.808510571 +0000 UTC m=+0.179511526 container init e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:39:52 np0005548915 podman[85329]: 2025-12-06 09:39:52.819291302 +0000 UTC m=+0.190292207 container start e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:39:52 np0005548915 podman[85329]: 2025-12-06 09:39:52.82370981 +0000 UTC m=+0.194710755 container attach e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 04:39:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  6 04:39:53 np0005548915 ceph-mon[74327]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:39:53 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:39:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec  6 04:39:53 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec  6 04:39:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v64: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:39:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  6 04:39:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:39:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  6 04:39:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:39:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec  6 04:39:54 np0005548915 musing_shtern[85344]: pool 'backups' created
Dec  6 04:39:54 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec  6 04:39:54 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:39:54 np0005548915 systemd[1]: libpod-e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277.scope: Deactivated successfully.
Dec  6 04:39:54 np0005548915 podman[85329]: 2025-12-06 09:39:54.224049285 +0000 UTC m=+1.595050210 container died e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  6 04:39:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-21674f9ed1e08dddc4b4cec40f9e266cb78b7dc81c3f358cdc1a5d6b4705a78b-merged.mount: Deactivated successfully.
Dec  6 04:39:54 np0005548915 podman[85329]: 2025-12-06 09:39:54.267598348 +0000 UTC m=+1.638599253 container remove e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:39:54 np0005548915 systemd[1]: libpod-conmon-e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277.scope: Deactivated successfully.
Dec  6 04:39:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:39:54 np0005548915 python3[85406]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:39:54 np0005548915 podman[85407]: 2025-12-06 09:39:54.67103381 +0000 UTC m=+0.039507302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:39:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  6 04:39:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v66: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:39:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v67: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:39:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec  6 04:39:58 np0005548915 podman[85407]: 2025-12-06 09:39:58.122847445 +0000 UTC m=+3.491320927 container create 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  6 04:39:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec  6 04:39:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:39:58 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:39:58 np0005548915 systemd[1]: Started libpod-conmon-5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850.scope.
Dec  6 04:39:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:39:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e602cd08424c74e1ae56a3fd9acea742ee3b8b4620419111bb4cfdf2cbf2e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e602cd08424c74e1ae56a3fd9acea742ee3b8b4620419111bb4cfdf2cbf2e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:39:58 np0005548915 podman[85407]: 2025-12-06 09:39:58.292068362 +0000 UTC m=+3.660541814 container init 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:39:58 np0005548915 podman[85407]: 2025-12-06 09:39:58.298903034 +0000 UTC m=+3.667376466 container start 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:39:58 np0005548915 podman[85407]: 2025-12-06 09:39:58.302545061 +0000 UTC m=+3.671018523 container attach 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:39:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  6 04:39:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:39:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  6 04:39:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v69: 4 pgs: 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:39:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:39:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec  6 04:39:59 np0005548915 optimistic_gagarin[85422]: pool 'images' created
Dec  6 04:39:59 np0005548915 podman[85407]: 2025-12-06 09:39:59.996840834 +0000 UTC m=+5.365314276 container died 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:39:59 np0005548915 systemd[1]: libpod-5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850.scope: Deactivated successfully.
Dec  6 04:40:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec  6 04:40:00 np0005548915 ceph-mon[74327]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:40:00 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:40:00 np0005548915 systemd[1]: var-lib-containers-storage-overlay-28e602cd08424c74e1ae56a3fd9acea742ee3b8b4620419111bb4cfdf2cbf2e9-merged.mount: Deactivated successfully.
Dec  6 04:40:01 np0005548915 podman[85407]: 2025-12-06 09:40:01.083239653 +0000 UTC m=+6.451713125 container remove 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:01 np0005548915 systemd[1]: libpod-conmon-5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850.scope: Deactivated successfully.
Dec  6 04:40:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:01 np0005548915 python3[85486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:01 np0005548915 podman[85487]: 2025-12-06 09:40:01.420344584 +0000 UTC m=+0.044047859 container create d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  6 04:40:01 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:40:01 np0005548915 systemd[1]: Started libpod-conmon-d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2.scope.
Dec  6 04:40:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec  6 04:40:01 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec  6 04:40:01 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db08c98fd338c31f4167bef0bb546b889aef87a0282036d0f709a743d23c4e57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db08c98fd338c31f4167bef0bb546b889aef87a0282036d0f709a743d23c4e57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:01 np0005548915 podman[85487]: 2025-12-06 09:40:01.401677169 +0000 UTC m=+0.025380464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:01 np0005548915 podman[85487]: 2025-12-06 09:40:01.500688549 +0000 UTC m=+0.124391854 container init d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:01 np0005548915 podman[85487]: 2025-12-06 09:40:01.507648095 +0000 UTC m=+0.131351370 container start d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:01 np0005548915 podman[85487]: 2025-12-06 09:40:01.517731762 +0000 UTC m=+0.141435037 container attach d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  6 04:40:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:40:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:40:02
Dec  6 04:40:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:40:02 np0005548915 ceph-mgr[74618]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:40:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  6 04:40:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:40:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:03 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:40:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:40:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec  6 04:40:03 np0005548915 gallant_lumiere[85502]: pool 'cephfs.cephfs.meta' created
Dec  6 04:40:03 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec  6 04:40:03 np0005548915 systemd[1]: libpod-d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2.scope: Deactivated successfully.
Dec  6 04:40:03 np0005548915 podman[85529]: 2025-12-06 09:40:03.163043815 +0000 UTC m=+0.027614216 container died d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay-db08c98fd338c31f4167bef0bb546b889aef87a0282036d0f709a743d23c4e57-merged.mount: Deactivated successfully.
Dec  6 04:40:03 np0005548915 podman[85529]: 2025-12-06 09:40:03.200854472 +0000 UTC m=+0.065424783 container remove d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:40:03 np0005548915 systemd[1]: libpod-conmon-d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2.scope: Deactivated successfully.
Dec  6 04:40:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:03 np0005548915 python3[85570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:03 np0005548915 podman[85571]: 2025-12-06 09:40:03.568066569 +0000 UTC m=+0.066226469 container create f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:03 np0005548915 systemd[1]: Started libpod-conmon-f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8.scope.
Dec  6 04:40:03 np0005548915 podman[85571]: 2025-12-06 09:40:03.538655015 +0000 UTC m=+0.036814985 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:03 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51b4f22a977b2a194508918095350f884936d45f15c1fd20abaf73ff4efe528/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51b4f22a977b2a194508918095350f884936d45f15c1fd20abaf73ff4efe528/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:03 np0005548915 podman[85571]: 2025-12-06 09:40:03.661703015 +0000 UTC m=+0.159862965 container init f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 04:40:03 np0005548915 podman[85571]: 2025-12-06 09:40:03.670615555 +0000 UTC m=+0.168775485 container start f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 04:40:03 np0005548915 podman[85571]: 2025-12-06 09:40:03.675969558 +0000 UTC m=+0.174129458 container attach f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec  6 04:40:04 np0005548915 pedantic_volhard[85587]: pool 'cephfs.cephfs.data' created
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec  6 04:40:04 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 7a9f3ae5-48bb-431a-9693-7f43cfabedf9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:04 np0005548915 systemd[1]: libpod-f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8.scope: Deactivated successfully.
Dec  6 04:40:04 np0005548915 podman[85571]: 2025-12-06 09:40:04.123150529 +0000 UTC m=+0.621310429 container died f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 04:40:04 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a51b4f22a977b2a194508918095350f884936d45f15c1fd20abaf73ff4efe528-merged.mount: Deactivated successfully.
Dec  6 04:40:04 np0005548915 podman[85571]: 2025-12-06 09:40:04.187386042 +0000 UTC m=+0.685545942 container remove f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:04 np0005548915 systemd[1]: libpod-conmon-f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8.scope: Deactivated successfully.
Dec  6 04:40:04 np0005548915 python3[85651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:40:04 np0005548915 podman[85652]: 2025-12-06 09:40:04.559961444 +0000 UTC m=+0.056382169 container create 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:40:04 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:40:04 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:40:04 np0005548915 systemd[1]: Started libpod-conmon-3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269.scope.
Dec  6 04:40:04 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eaec8881d8b4c56fd23c269018e6b3f346348fcb0fb0d357fcdc9cb671a19f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eaec8881d8b4c56fd23c269018e6b3f346348fcb0fb0d357fcdc9cb671a19f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:04 np0005548915 podman[85652]: 2025-12-06 09:40:04.542274591 +0000 UTC m=+0.038695336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:04 np0005548915 podman[85652]: 2025-12-06 09:40:04.640551927 +0000 UTC m=+0.136972682 container init 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:04 np0005548915 podman[85652]: 2025-12-06 09:40:04.648867357 +0000 UTC m=+0.145288082 container start 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 04:40:04 np0005548915 podman[85652]: 2025-12-06 09:40:04.65204142 +0000 UTC m=+0.148462145 container attach 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  6 04:40:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:40:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec  6 04:40:05 np0005548915 thirsty_wright[85668]: enabled application 'rbd' on pool 'vms'
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec  6 04:40:05 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 77f96f84-04b9-4f8b-a569-a0f337be7483 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:05 np0005548915 systemd[1]: libpod-3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269.scope: Deactivated successfully.
Dec  6 04:40:05 np0005548915 podman[85652]: 2025-12-06 09:40:05.132894323 +0000 UTC m=+0.629315048 container died 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 04:40:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay-50eaec8881d8b4c56fd23c269018e6b3f346348fcb0fb0d357fcdc9cb671a19f-merged.mount: Deactivated successfully.
Dec  6 04:40:05 np0005548915 podman[85652]: 2025-12-06 09:40:05.170859864 +0000 UTC m=+0.667280609 container remove 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:05 np0005548915 systemd[1]: libpod-conmon-3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269.scope: Deactivated successfully.
Dec  6 04:40:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:40:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:05 np0005548915 python3[85729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:05 np0005548915 podman[85730]: 2025-12-06 09:40:05.554815844 +0000 UTC m=+0.050814098 container create 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 04:40:05 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:40:05 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:40:05 np0005548915 systemd[1]: Started libpod-conmon-04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb.scope.
Dec  6 04:40:05 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce4206e6813446cd2c8681b83047b28f40eeed2615c63236032567971dccf90/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce4206e6813446cd2c8681b83047b28f40eeed2615c63236032567971dccf90/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:05 np0005548915 podman[85730]: 2025-12-06 09:40:05.52909269 +0000 UTC m=+0.025090954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:05 np0005548915 podman[85730]: 2025-12-06 09:40:05.847335261 +0000 UTC m=+0.343333595 container init 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:40:05 np0005548915 podman[85730]: 2025-12-06 09:40:05.858795292 +0000 UTC m=+0.354793526 container start 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:05 np0005548915 podman[85730]: 2025-12-06 09:40:05.86336569 +0000 UTC m=+0.359363924 container attach 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:06 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:40:06 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Dec  6 04:40:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=24 pruub=8.415267944s) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active pruub 36.849887848s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=24 pruub=8.415267944s) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown pruub 36.849887848s@ mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:40:06 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 1f91fb4a-84b8-4b07-86b9-ad6cf512b1c6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v79: 69 pgs: 64 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev f9d57ea0-0593-4d0f-83be-a56b20ce3d10 (Updating mon deployment (+2 -> 3))
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:06 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec  6 04:40:06 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: Deploying daemon mon.compute-2 on compute-2
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Dec  6 04:40:07 np0005548915 silly_mclaren[85743]: enabled application 'rbd' on pool 'volumes'
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 83190a00-1b74-48c3-91b4-335b8d313526 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 7a9f3ae5-48bb-431a-9693-7f43cfabedf9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 7a9f3ae5-48bb-431a-9693-7f43cfabedf9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 77f96f84-04b9-4f8b-a569-a0f337be7483 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 77f96f84-04b9-4f8b-a569-a0f337be7483 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 1f91fb4a-84b8-4b07-86b9-ad6cf512b1c6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 1f91fb4a-84b8-4b07-86b9-ad6cf512b1c6 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 83190a00-1b74-48c3-91b4-335b8d313526 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  6 04:40:07 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 83190a00-1b74-48c3-91b4-335b8d313526 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1c( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1d( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.8( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.7( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.2( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.5( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.3( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.b( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.f( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.11( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.12( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.14( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.16( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.17( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.18( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1a( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.8( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.2( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.0( empty local-lis/les=24/25 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.7( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.3( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.11( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.14( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.16( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.17( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:07 np0005548915 systemd[1]: libpod-04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb.scope: Deactivated successfully.
Dec  6 04:40:07 np0005548915 podman[85730]: 2025-12-06 09:40:07.167464679 +0000 UTC m=+1.663462933 container died 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2ce4206e6813446cd2c8681b83047b28f40eeed2615c63236032567971dccf90-merged.mount: Deactivated successfully.
Dec  6 04:40:07 np0005548915 podman[85730]: 2025-12-06 09:40:07.204088417 +0000 UTC m=+1.700086661 container remove 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 04:40:07 np0005548915 systemd[1]: libpod-conmon-04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb.scope: Deactivated successfully.
Dec  6 04:40:07 np0005548915 python3[85805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:07 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  6 04:40:07 np0005548915 podman[85806]: 2025-12-06 09:40:07.597359349 +0000 UTC m=+0.062933631 container create b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 04:40:07 np0005548915 systemd[1]: Started libpod-conmon-b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f.scope.
Dec  6 04:40:07 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3150ff693cb8633c70c6e4a92bd6b972a44ab3ac5ebf4d4a0c0527ade6af70a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3150ff693cb8633c70c6e4a92bd6b972a44ab3ac5ebf4d4a0c0527ade6af70a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:07 np0005548915 podman[85806]: 2025-12-06 09:40:07.579611434 +0000 UTC m=+0.045185756 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:07 np0005548915 podman[85806]: 2025-12-06 09:40:07.67724104 +0000 UTC m=+0.142815412 container init b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:07 np0005548915 podman[85806]: 2025-12-06 09:40:07.6858707 +0000 UTC m=+0.151444982 container start b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:07 np0005548915 podman[85806]: 2025-12-06 09:40:07.68991142 +0000 UTC m=+0.155485802 container attach b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  6 04:40:07 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  6 04:40:08 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 6 completed events
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:08 np0005548915 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,95 pgs not in active + clean state
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Dec  6 04:40:08 np0005548915 bold_cori[85821]: enabled application 'rbd' on pool 'backups'
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  6 04:40:08 np0005548915 systemd[1]: libpod-b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f.scope: Deactivated successfully.
Dec  6 04:40:08 np0005548915 podman[85806]: 2025-12-06 09:40:08.168680016 +0000 UTC m=+0.634254308 container died b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3150ff693cb8633c70c6e4a92bd6b972a44ab3ac5ebf4d4a0c0527ade6af70a7-merged.mount: Deactivated successfully.
Dec  6 04:40:08 np0005548915 podman[85806]: 2025-12-06 09:40:08.206352488 +0000 UTC m=+0.671926770 container remove b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 04:40:08 np0005548915 systemd[1]: libpod-conmon-b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f.scope: Deactivated successfully.
Dec  6 04:40:08 np0005548915 python3[85884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v82: 100 pgs: 1 peering, 94 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:40:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:08 np0005548915 podman[85885]: 2025-12-06 09:40:08.557391321 +0000 UTC m=+0.021963834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:08 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  6 04:40:08 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  6 04:40:09 np0005548915 podman[85885]: 2025-12-06 09:40:09.082876212 +0000 UTC m=+0.547448735 container create 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:40:09 np0005548915 systemd[75653]: Starting Mark boot as successful...
Dec  6 04:40:09 np0005548915 systemd[75653]: Finished Mark boot as successful.
Dec  6 04:40:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  6 04:40:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:40:09 np0005548915 systemd[1]: Started libpod-conmon-0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e.scope.
Dec  6 04:40:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ccd2f50842bd7928969aa56bc15b6dc8f9512af1bb94519f4816e3b58c66e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ccd2f50842bd7928969aa56bc15b6dc8f9512af1bb94519f4816e3b58c66e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:40:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Dec  6 04:40:09 np0005548915 podman[85885]: 2025-12-06 09:40:09.83714012 +0000 UTC m=+1.301712663 container init 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 04:40:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec  6 04:40:09 np0005548915 podman[85885]: 2025-12-06 09:40:09.843807116 +0000 UTC m=+1.308379619 container start 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:40:09 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec  6 04:40:09 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec  6 04:40:09 np0005548915 podman[85885]: 2025-12-06 09:40:09.987458104 +0000 UTC m=+1.452030597 container attach 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec  6 04:40:10 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec  6 04:40:10 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  6 04:40:10 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec  6 04:40:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:40:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v84: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:10 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  6 04:40:11 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  6 04:40:11 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:11 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:11 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  6 04:40:11 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  6 04:40:12 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:12 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v85: 131 pgs: 31 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 04:40:12 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:12 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:13 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  6 04:40:13 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  6 04:40:13 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:13 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:13 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:13 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:14 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Dec  6 04:40:14 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Dec  6 04:40:14 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:14 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v86: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 04:40:14 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:14 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:15 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  6 04:40:15 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  6 04:40:15 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:15 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:15 np0005548915 ceph-mon[74327]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec  6 04:40:15 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:40:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:15 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:16 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  6 04:40:16 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  6 04:40:16 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:16 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v87: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 04:40:16 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:16 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:17 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  6 04:40:17 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  6 04:40:17 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:17 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:17 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:17 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:18 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  6 04:40:18 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  6 04:40:18 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:18 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  6 04:40:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v88: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 04:40:18 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:18 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:40:10.449868+0000
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap 
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qhdjwa(active, since 2m)
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Dec  6 04:40:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:19 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec  6 04:40:19 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Dec  6 04:40:19 np0005548915 pensive_payne[85901]: enabled application 'rbd' on pool 'images'
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: Deploying daemon mon.compute-1 on compute-1
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0 calling monitor election
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec  6 04:40:19 np0005548915 systemd[1]: libpod-0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e.scope: Deactivated successfully.
Dec  6 04:40:19 np0005548915 podman[85885]: 2025-12-06 09:40:19.249657634 +0000 UTC m=+10.714230137 container died 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:19 np0005548915 systemd[1]: var-lib-containers-storage-overlay-36ccd2f50842bd7928969aa56bc15b6dc8f9512af1bb94519f4816e3b58c66e1-merged.mount: Deactivated successfully.
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev f9d57ea0-0593-4d0f-83be-a56b20ce3d10 (Updating mon deployment (+2 -> 3))
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event f9d57ea0-0593-4d0f-83be-a56b20ce3d10 (Updating mon deployment (+2 -> 3)) in 13 seconds
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 37639f43-4dce-4807-8a49-da327e3558b8 (Updating mgr deployment (+2 -> 3))
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:40:19 np0005548915 podman[85885]: 2025-12-06 09:40:19.895857248 +0000 UTC m=+11.360429741 container remove 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.oazbvn on compute-2
Dec  6 04:40:19 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.oazbvn on compute-2
Dec  6 04:40:19 np0005548915 systemd[1]: libpod-conmon-0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e.scope: Deactivated successfully.
Dec  6 04:40:20 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Dec  6 04:40:20 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Dec  6 04:40:20 np0005548915 python3[85963]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:20 np0005548915 podman[85964]: 2025-12-06 09:40:20.297208294 +0000 UTC m=+0.049379103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:20 np0005548915 podman[85964]: 2025-12-06 09:40:20.446705572 +0000 UTC m=+0.198876401 container create 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-2 calling monitor election
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec  6 04:40:20 np0005548915 ceph-mon[74327]:    application not enabled on pool 'images'
Dec  6 04:40:20 np0005548915 ceph-mon[74327]:    application not enabled on pool 'cephfs.cephfs.meta'
Dec  6 04:40:20 np0005548915 ceph-mon[74327]:    application not enabled on pool 'cephfs.cephfs.data'
Dec  6 04:40:20 np0005548915 ceph-mon[74327]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  6 04:40:20 np0005548915 ceph-mgr[74618]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec  6 04:40:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:40:20.454+0000 7f8d54bf6640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec  6 04:40:20 np0005548915 systemd[1]: Started libpod-conmon-77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f.scope.
Dec  6 04:40:20 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fbd7bbef96ba7d16884585bd199c0a8c583e9b1ed13a478a9c55a01936fcda6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fbd7bbef96ba7d16884585bd199c0a8c583e9b1ed13a478a9c55a01936fcda6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:20 np0005548915 podman[85964]: 2025-12-06 09:40:20.59374977 +0000 UTC m=+0.345920569 container init 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:20 np0005548915 podman[85964]: 2025-12-06 09:40:20.600026864 +0000 UTC m=+0.352197673 container start 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:40:20 np0005548915 podman[85964]: 2025-12-06 09:40:20.604492378 +0000 UTC m=+0.356663167 container attach 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec  6 04:40:20 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:20 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  6 04:40:20 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec  6 04:40:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:40:21 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  6 04:40:21 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  6 04:40:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec  6 04:40:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  6 04:40:21 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:21 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  6 04:40:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  6 04:40:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:40:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  6 04:40:22 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  6 04:40:22 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  6 04:40:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  6 04:40:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v91: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:22 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:22 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  6 04:40:23 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 7 completed events
Dec  6 04:40:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:40:23 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec  6 04:40:23 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec  6 04:40:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  6 04:40:23 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:23 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  6 04:40:24 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  6 04:40:24 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  6 04:40:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:24 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:24 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  6 04:40:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  6 04:40:25 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  6 04:40:25 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  6 04:40:25 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:25 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:40:20.714037+0000
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap 
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qhdjwa(active, since 2m)
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Dec  6 04:40:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Dec  6 04:40:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v93: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Dec  6 04:40:26 np0005548915 friendly_solomon[85980]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0 calling monitor election
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-2 calling monitor election
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec  6 04:40:26 np0005548915 ceph-mon[74327]:    application not enabled on pool 'images'
Dec  6 04:40:26 np0005548915 ceph-mon[74327]:    application not enabled on pool 'cephfs.cephfs.meta'
Dec  6 04:40:26 np0005548915 ceph-mon[74327]:    application not enabled on pool 'cephfs.cephfs.data'
Dec  6 04:40:26 np0005548915 ceph-mon[74327]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Dec  6 04:40:26 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event f21b6bbe-67b9-44d9-b566-03cc3ef21868 (Global Recovery Event) in 19 seconds
Dec  6 04:40:26 np0005548915 systemd[1]: libpod-77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f.scope: Deactivated successfully.
Dec  6 04:40:26 np0005548915 podman[85964]: 2025-12-06 09:40:26.589624301 +0000 UTC m=+6.341795100 container died 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1b( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1c( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.e( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.2( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.7( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.4( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.15( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.10( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552670479s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.480159760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552635193s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.480159760s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552288055s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479991913s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552268028s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479991913s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551779747s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.480148315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551755905s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.480148315s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551189423s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479732513s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551174164s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479732513s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550840378s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479709625s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550812721s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479709625s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550696373s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479644775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550770760s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479736328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550655365s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479644775s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550718307s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479736328s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550441742s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479530334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550426483s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479530334s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550407410s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479534149s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550385475s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479534149s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550285339s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479450226s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550263405s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479450226s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550206184s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479431152s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550192833s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479431152s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550135612s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479404449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550116539s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479404449s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550026894s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479393005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550013542s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479393005s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550060272s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479457855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550036430s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479457855s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550024033s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479457855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:40:26 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550000191s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479457855s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:40:26 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1fbd7bbef96ba7d16884585bd199c0a8c583e9b1ed13a478a9c55a01936fcda6-merged.mount: Deactivated successfully.
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:26 np0005548915 podman[85964]: 2025-12-06 09:40:26.632286054 +0000 UTC m=+6.384456853 container remove 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:40:26 np0005548915 systemd[1]: libpod-conmon-77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f.scope: Deactivated successfully.
Dec  6 04:40:26 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:26 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.sauzid on compute-1
Dec  6 04:40:26 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.sauzid on compute-1
Dec  6 04:40:26 np0005548915 python3[86042]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:27 np0005548915 podman[86043]: 2025-12-06 09:40:27.0198014 +0000 UTC m=+0.052717170 container create 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:27 np0005548915 systemd[1]: Started libpod-conmon-34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b.scope.
Dec  6 04:40:27 np0005548915 podman[86043]: 2025-12-06 09:40:26.998993446 +0000 UTC m=+0.031909246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:27 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86701212fc2b0f61d2e1f69c2c47b98f2c6c9a1f9326e5e5273f8ddc75b23396/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86701212fc2b0f61d2e1f69c2c47b98f2c6c9a1f9326e5e5273f8ddc75b23396/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:27 np0005548915 podman[86043]: 2025-12-06 09:40:27.116397033 +0000 UTC m=+0.149312823 container init 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  6 04:40:27 np0005548915 podman[86043]: 2025-12-06 09:40:27.127034238 +0000 UTC m=+0.159950008 container start 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 04:40:27 np0005548915 podman[86043]: 2025-12-06 09:40:27.13016292 +0000 UTC m=+0.163078690 container attach 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  6 04:40:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:40:27.718+0000 7f8d54bf6640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec  6 04:40:27 np0005548915 ceph-mgr[74618]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Dec  6 04:40:27 np0005548915 wonderful_ride[86060]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1f( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.10( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1c( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1b( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: mon.compute-1 calling monitor election
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  6 04:40:27 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  6 04:40:27 np0005548915 systemd[1]: libpod-34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b.scope: Deactivated successfully.
Dec  6 04:40:27 np0005548915 podman[86043]: 2025-12-06 09:40:27.869567917 +0000 UTC m=+0.902483727 container died 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:40:27 np0005548915 systemd[1]: var-lib-containers-storage-overlay-86701212fc2b0f61d2e1f69c2c47b98f2c6c9a1f9326e5e5273f8ddc75b23396-merged.mount: Deactivated successfully.
Dec  6 04:40:27 np0005548915 podman[86043]: 2025-12-06 09:40:27.926351948 +0000 UTC m=+0.959267738 container remove 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:40:27 np0005548915 systemd[1]: libpod-conmon-34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b.scope: Deactivated successfully.
Dec  6 04:40:28 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  6 04:40:28 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  6 04:40:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v96: 131 pgs: 47 peering, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: Deploying daemon mgr.compute-1.sauzid on compute-1
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:40:28 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  6 04:40:28 np0005548915 python3[86171]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:40:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:40:29 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  6 04:40:29 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  6 04:40:29 np0005548915 python3[86242]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014028.6305497-37195-124427659970167/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:40:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:40:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:29 np0005548915 python3[86344]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:40:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  6 04:40:30 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec  6 04:40:30 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec  6 04:40:30 np0005548915 python3[86419]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014029.686952-37209-84785446435248/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=31fd15111dbd1a80f398078c01d166287a76fc4d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:40:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v97: 131 pgs: 47 peering, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:30 np0005548915 python3[86469]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:31 np0005548915 podman[86470]: 2025-12-06 09:40:30.908221852 +0000 UTC m=+0.039591235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:31 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  6 04:40:31 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  6 04:40:31 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 8 completed events
Dec  6 04:40:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:40:31 np0005548915 podman[86470]: 2025-12-06 09:40:31.764232671 +0000 UTC m=+0.895601994 container create 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:32 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Dec  6 04:40:32 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-2.oazbvn 192.168.122.102:0/242837708; not ready for session (expect reconnect)
Dec  6 04:40:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v98: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:32 np0005548915 systemd[1]: Started libpod-conmon-66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c.scope.
Dec  6 04:40:32 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:32 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:32 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:32 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:32 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Dec  6 04:40:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:40:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:40:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:40:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:40:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:40:33 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:40:33 np0005548915 podman[86470]: 2025-12-06 09:40:33.076515994 +0000 UTC m=+2.207885367 container init 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  6 04:40:33 np0005548915 podman[86470]: 2025-12-06 09:40:33.090109995 +0000 UTC m=+2.221479328 container start 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:33 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  6 04:40:33 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-2.oazbvn 192.168.122.102:0/242837708; not ready for session (expect reconnect)
Dec  6 04:40:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  6 04:40:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  6 04:40:33 np0005548915 podman[86470]: 2025-12-06 09:40:33.650922861 +0000 UTC m=+2.782292204 container attach 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Dec  6 04:40:33 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  6 04:40:33 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec  6 04:40:33 np0005548915 ceph-mon[74327]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec  6 04:40:33 np0005548915 ceph-mon[74327]: Cluster is now healthy
Dec  6 04:40:33 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:34 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 37639f43-4dce-4807-8a49-da327e3558b8 (Updating mgr deployment (+2 -> 3))
Dec  6 04:40:34 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 37639f43-4dce-4807-8a49-da327e3558b8 (Updating mgr deployment (+2 -> 3)) in 14 seconds
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.qhdjwa(active, since 2m), standbys: compute-2.oazbvn
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  6 04:40:34 np0005548915 interesting_roentgen[86485]: 
Dec  6 04:40:34 np0005548915 interesting_roentgen[86485]: [global]
Dec  6 04:40:34 np0005548915 interesting_roentgen[86485]: #011fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec  6 04:40:34 np0005548915 interesting_roentgen[86485]: #011mon_host = 192.168.122.100
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:34 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 5bba6670-7dce-4123-9ffd-b3a9f0458b17 (Updating crash deployment (+1 -> 3))
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec  6 04:40:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec  6 04:40:34 np0005548915 systemd[1]: libpod-66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c.scope: Deactivated successfully.
Dec  6 04:40:34 np0005548915 podman[86470]: 2025-12-06 09:40:34.206143957 +0000 UTC m=+3.337513290 container died 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:34 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  6 04:40:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9-merged.mount: Deactivated successfully.
Dec  6 04:40:34 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  6 04:40:34 np0005548915 podman[86470]: 2025-12-06 09:40:34.251328102 +0000 UTC m=+3.382697395 container remove 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:34 np0005548915 systemd[1]: libpod-conmon-66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c.scope: Deactivated successfully.
Dec  6 04:40:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v99: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:34 np0005548915 python3[86547]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:34 np0005548915 podman[86548]: 2025-12-06 09:40:34.704511677 +0000 UTC m=+0.038582032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:35 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  6 04:40:35 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  6 04:40:36 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec  6 04:40:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v100: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:36 np0005548915 podman[86548]: 2025-12-06 09:40:36.775156534 +0000 UTC m=+2.109226879 container create b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:40:37 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec  6 04:40:37 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Dec  6 04:40:37 np0005548915 systemd[1]: Started libpod-conmon-b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602.scope.
Dec  6 04:40:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Dec  6 04:40:38 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec  6 04:40:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  6 04:40:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v101: 131 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 129 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  6 04:40:38 np0005548915 ceph-mon[74327]: Deploying daemon crash.compute-2 on compute-2
Dec  6 04:40:39 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec  6 04:40:39 np0005548915 podman[86548]: 2025-12-06 09:40:39.23039136 +0000 UTC m=+4.564461705 container init b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 04:40:39 np0005548915 podman[86548]: 2025-12-06 09:40:39.242037568 +0000 UTC m=+4.576107913 container start b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 04:40:39 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  6 04:40:39 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec  6 04:40:40 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec  6 04:40:40 np0005548915 podman[86548]: 2025-12-06 09:40:40.229815579 +0000 UTC m=+5.563885924 container attach b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:40 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec  6 04:40:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:40 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec  6 04:40:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v102: 131 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 129 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec  6 04:40:40 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  6 04:40:40 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  6 04:40:41 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec  6 04:40:41 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  6 04:40:41 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  6 04:40:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940510154' entity='client.admin' 
Dec  6 04:40:41 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.qhdjwa(active, since 2m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:40:41 np0005548915 stupefied_black[86563]: set ssl_option
Dec  6 04:40:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec  6 04:40:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec  6 04:40:41 np0005548915 systemd[1]: libpod-b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602.scope: Deactivated successfully.
Dec  6 04:40:41 np0005548915 podman[86548]: 2025-12-06 09:40:41.956998827 +0000 UTC m=+7.291069172 container died b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:40:41 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7-merged.mount: Deactivated successfully.
Dec  6 04:40:42 np0005548915 podman[86548]: 2025-12-06 09:40:42.000005792 +0000 UTC m=+7.334076117 container remove b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:42 np0005548915 systemd[1]: libpod-conmon-b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602.scope: Deactivated successfully.
Dec  6 04:40:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:40:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:40:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  6 04:40:42 np0005548915 python3[86624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:42 np0005548915 podman[86625]: 2025-12-06 09:40:42.4402982 +0000 UTC m=+0.042279242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v103: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:42 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 5bba6670-7dce-4123-9ffd-b3a9f0458b17 (Updating crash deployment (+1 -> 3))
Dec  6 04:40:42 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 5bba6670-7dce-4123-9ffd-b3a9f0458b17 (Updating crash deployment (+1 -> 3)) in 9 seconds
Dec  6 04:40:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  6 04:40:42 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  6 04:40:42 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  6 04:40:43 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  6 04:40:44 np0005548915 podman[86625]: 2025-12-06 09:40:44.105753086 +0000 UTC m=+1.707734098 container create f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:44 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 10 completed events
Dec  6 04:40:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:40:44 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  6 04:40:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v104: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:44 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:40:45 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:45 np0005548915 systemd[1]: Started libpod-conmon-f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62.scope.
Dec  6 04:40:45 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:45 np0005548915 podman[86625]: 2025-12-06 09:40:45.188362302 +0000 UTC m=+2.790343404 container init f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:45 np0005548915 podman[86625]: 2025-12-06 09:40:45.196147815 +0000 UTC m=+2.798128837 container start f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:45 np0005548915 podman[86625]: 2025-12-06 09:40:45.199833775 +0000 UTC m=+2.801814837 container attach f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940510154' entity='client.admin' 
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:45 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:40:45 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  6 04:40:45 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  6 04:40:45 np0005548915 podman[86753]: 2025-12-06 09:40:45.604080543 +0000 UTC m=+0.021876470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:40:45 np0005548915 podman[86753]: 2025-12-06 09:40:45.848370995 +0000 UTC m=+0.266166932 container create a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:40:45 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:45 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec  6 04:40:45 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec  6 04:40:45 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  6 04:40:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:45 np0005548915 sharp_jackson[86643]: Scheduled rgw.rgw update...
Dec  6 04:40:45 np0005548915 sharp_jackson[86643]: Scheduled ingress.rgw.default update...
Dec  6 04:40:45 np0005548915 systemd[1]: Started libpod-conmon-a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869.scope.
Dec  6 04:40:45 np0005548915 podman[86625]: 2025-12-06 09:40:45.904136353 +0000 UTC m=+3.506117435 container died f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  6 04:40:45 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:45 np0005548915 systemd[1]: libpod-f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62.scope: Deactivated successfully.
Dec  6 04:40:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d-merged.mount: Deactivated successfully.
Dec  6 04:40:45 np0005548915 podman[86753]: 2025-12-06 09:40:45.941509855 +0000 UTC m=+0.359305802 container init a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 04:40:45 np0005548915 podman[86753]: 2025-12-06 09:40:45.947630594 +0000 UTC m=+0.365426541 container start a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:45 np0005548915 charming_wiles[86770]: 167 167
Dec  6 04:40:45 np0005548915 systemd[1]: libpod-a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869.scope: Deactivated successfully.
Dec  6 04:40:45 np0005548915 podman[86753]: 2025-12-06 09:40:45.960346345 +0000 UTC m=+0.378142272 container attach a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:45 np0005548915 podman[86753]: 2025-12-06 09:40:45.960847402 +0000 UTC m=+0.378643319 container died a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 04:40:45 np0005548915 podman[86625]: 2025-12-06 09:40:45.979674972 +0000 UTC m=+3.581655984 container remove f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:40:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ded9492e258cff351827ceaa600f8636e068d8ba251fd16408ac5b8f0e3c105e-merged.mount: Deactivated successfully.
Dec  6 04:40:46 np0005548915 podman[86753]: 2025-12-06 09:40:46.006442901 +0000 UTC m=+0.424238818 container remove a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:46 np0005548915 systemd[1]: libpod-conmon-a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869.scope: Deactivated successfully.
Dec  6 04:40:46 np0005548915 systemd[1]: libpod-conmon-f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62.scope: Deactivated successfully.
Dec  6 04:40:46 np0005548915 podman[86804]: 2025-12-06 09:40:46.153339864 +0000 UTC m=+0.039818342 container create 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:40:46 np0005548915 systemd[1]: Started libpod-conmon-39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326.scope.
Dec  6 04:40:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:46 np0005548915 podman[86804]: 2025-12-06 09:40:46.225455953 +0000 UTC m=+0.111934441 container init 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:40:46 np0005548915 podman[86804]: 2025-12-06 09:40:46.137325385 +0000 UTC m=+0.023803883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:40:46 np0005548915 podman[86804]: 2025-12-06 09:40:46.233558675 +0000 UTC m=+0.120037143 container start 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:46 np0005548915 podman[86804]: 2025-12-06 09:40:46.236865792 +0000 UTC m=+0.123344270 container attach 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:46 np0005548915 python3[86901]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:40:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:46 np0005548915 determined_visvesvaraya[86857]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:40:46 np0005548915 determined_visvesvaraya[86857]: --> All data devices are unavailable
Dec  6 04:40:46 np0005548915 systemd[1]: libpod-39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326.scope: Deactivated successfully.
Dec  6 04:40:46 np0005548915 podman[86804]: 2025-12-06 09:40:46.615628925 +0000 UTC m=+0.502107463 container died 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:46 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  6 04:40:46 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  6 04:40:47 np0005548915 python3[86992]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014046.16402-37228-257896791076356/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:40:47 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:47 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:40:47 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:40:47 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:47 np0005548915 ceph-mon[74327]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  6 04:40:47 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:47 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571-merged.mount: Deactivated successfully.
Dec  6 04:40:47 np0005548915 podman[86804]: 2025-12-06 09:40:47.805372455 +0000 UTC m=+1.691850943 container remove 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:47 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  6 04:40:47 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  6 04:40:47 np0005548915 systemd[1]: libpod-conmon-39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326.scope: Deactivated successfully.
Dec  6 04:40:48 np0005548915 python3[87107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:48 np0005548915 podman[87130]: 2025-12-06 09:40:48.32637056 +0000 UTC m=+0.042525560 container create af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:48 np0005548915 systemd[1]: Started libpod-conmon-af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937.scope.
Dec  6 04:40:48 np0005548915 podman[87144]: 2025-12-06 09:40:48.373825399 +0000 UTC m=+0.043776691 container create be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  6 04:40:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:48 np0005548915 systemd[1]: Started libpod-conmon-be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0.scope.
Dec  6 04:40:48 np0005548915 podman[87130]: 2025-12-06 09:40:48.400739932 +0000 UTC m=+0.116894922 container init af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:48 np0005548915 podman[87130]: 2025-12-06 09:40:48.30537589 +0000 UTC m=+0.021530890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:40:48 np0005548915 podman[87130]: 2025-12-06 09:40:48.406573011 +0000 UTC m=+0.122727981 container start af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:48 np0005548915 podman[87130]: 2025-12-06 09:40:48.409838677 +0000 UTC m=+0.125993657 container attach af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:48 np0005548915 serene_germain[87160]: 167 167
Dec  6 04:40:48 np0005548915 podman[87130]: 2025-12-06 09:40:48.412060179 +0000 UTC m=+0.128215159 container died af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 04:40:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:48 np0005548915 systemd[1]: libpod-af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937.scope: Deactivated successfully.
Dec  6 04:40:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:48 np0005548915 podman[87144]: 2025-12-06 09:40:48.448919524 +0000 UTC m=+0.118870846 container init be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:48 np0005548915 podman[87144]: 2025-12-06 09:40:48.354660407 +0000 UTC m=+0.024611749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-83a01a01761defafc008b21797370d6d33091f1cc36e69bd4875225bf814724f-merged.mount: Deactivated successfully.
Dec  6 04:40:48 np0005548915 podman[87144]: 2025-12-06 09:40:48.455931651 +0000 UTC m=+0.125882963 container start be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  6 04:40:48 np0005548915 podman[87130]: 2025-12-06 09:40:48.478211944 +0000 UTC m=+0.194366934 container remove af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:48 np0005548915 systemd[1]: libpod-conmon-af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937.scope: Deactivated successfully.
Dec  6 04:40:48 np0005548915 podman[87144]: 2025-12-06 09:40:48.495191544 +0000 UTC m=+0.165142846 container attach be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v106: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:48 np0005548915 podman[87209]: 2025-12-06 09:40:48.673670422 +0000 UTC m=+0.045894929 container create 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:40:48 np0005548915 systemd[1]: Started libpod-conmon-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope.
Dec  6 04:40:48 np0005548915 podman[87209]: 2025-12-06 09:40:48.65512134 +0000 UTC m=+0.027345887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:40:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:48 np0005548915 podman[87209]: 2025-12-06 09:40:48.774012156 +0000 UTC m=+0.146236683 container init 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: Saving service ingress.rgw.default spec with placement count:2
Dec  6 04:40:48 np0005548915 podman[87209]: 2025-12-06 09:40:48.78092641 +0000 UTC m=+0.153150907 container start 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:48 np0005548915 podman[87209]: 2025-12-06 09:40:48.785342114 +0000 UTC m=+0.157566651 container attach 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:48 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  6 04:40:48 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec  6 04:40:48 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  6 04:40:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:48 np0005548915 eager_fermat[87165]: Scheduled node-exporter update...
Dec  6 04:40:48 np0005548915 eager_fermat[87165]: Scheduled grafana update...
Dec  6 04:40:48 np0005548915 eager_fermat[87165]: Scheduled prometheus update...
Dec  6 04:40:48 np0005548915 eager_fermat[87165]: Scheduled alertmanager update...
Dec  6 04:40:48 np0005548915 systemd[1]: libpod-be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0.scope: Deactivated successfully.
Dec  6 04:40:48 np0005548915 podman[87144]: 2025-12-06 09:40:48.985838255 +0000 UTC m=+0.655789577 container died be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"} v 0)
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]: dispatch
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  6 04:40:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787-merged.mount: Deactivated successfully.
Dec  6 04:40:49 np0005548915 podman[87144]: 2025-12-06 09:40:49.023716503 +0000 UTC m=+0.693667795 container remove be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:40:49 np0005548915 systemd[1]: libpod-conmon-be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0.scope: Deactivated successfully.
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]': finished
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  6 04:40:49 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  6 04:40:49 np0005548915 charming_benz[87226]: {
Dec  6 04:40:49 np0005548915 charming_benz[87226]:    "1": [
Dec  6 04:40:49 np0005548915 charming_benz[87226]:        {
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "devices": [
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "/dev/loop3"
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            ],
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "lv_name": "ceph_lv0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "lv_size": "21470642176",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "name": "ceph_lv0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "tags": {
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.cluster_name": "ceph",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.crush_device_class": "",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.encrypted": "0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.osd_id": "1",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.type": "block",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.vdo": "0",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:                "ceph.with_tpm": "0"
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            },
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "type": "block",
Dec  6 04:40:49 np0005548915 charming_benz[87226]:            "vg_name": "ceph_vg0"
Dec  6 04:40:49 np0005548915 charming_benz[87226]:        }
Dec  6 04:40:49 np0005548915 charming_benz[87226]:    ]
Dec  6 04:40:49 np0005548915 charming_benz[87226]: }
Dec  6 04:40:49 np0005548915 systemd[1]: libpod-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope: Deactivated successfully.
Dec  6 04:40:49 np0005548915 conmon[87226]: conmon 7ba1f5b9a5716b99d010 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope/container/memory.events
Dec  6 04:40:49 np0005548915 podman[87209]: 2025-12-06 09:40:49.085325171 +0000 UTC m=+0.457549678 container died 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:40:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c-merged.mount: Deactivated successfully.
Dec  6 04:40:49 np0005548915 podman[87209]: 2025-12-06 09:40:49.137245735 +0000 UTC m=+0.509470232 container remove 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:49 np0005548915 systemd[1]: libpod-conmon-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope: Deactivated successfully.
Dec  6 04:40:49 np0005548915 python3[87336]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:49 np0005548915 podman[87361]: 2025-12-06 09:40:49.66194871 +0000 UTC m=+0.048436391 container create c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  6 04:40:49 np0005548915 systemd[1]: Started libpod-conmon-c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6.scope.
Dec  6 04:40:49 np0005548915 podman[87388]: 2025-12-06 09:40:49.731140614 +0000 UTC m=+0.041861749 container create 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 04:40:49 np0005548915 podman[87361]: 2025-12-06 09:40:49.637379403 +0000 UTC m=+0.023867114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:49 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:49 np0005548915 systemd[1]: Started libpod-conmon-54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f.scope.
Dec  6 04:40:49 np0005548915 podman[87361]: 2025-12-06 09:40:49.754820701 +0000 UTC m=+0.141308482 container init c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:49 np0005548915 podman[87361]: 2025-12-06 09:40:49.765100165 +0000 UTC m=+0.151587836 container start c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:40:49 np0005548915 podman[87361]: 2025-12-06 09:40:49.768371131 +0000 UTC m=+0.154858892 container attach c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:49 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:49 np0005548915 podman[87388]: 2025-12-06 09:40:49.80133232 +0000 UTC m=+0.112053455 container init 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:49 np0005548915 podman[87388]: 2025-12-06 09:40:49.808323956 +0000 UTC m=+0.119045101 container start 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:49 np0005548915 podman[87388]: 2025-12-06 09:40:49.71346193 +0000 UTC m=+0.024183115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:40:49 np0005548915 practical_diffie[87408]: 167 167
Dec  6 04:40:49 np0005548915 systemd[1]: libpod-54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f.scope: Deactivated successfully.
Dec  6 04:40:49 np0005548915 podman[87388]: 2025-12-06 09:40:49.812000095 +0000 UTC m=+0.122721230 container attach 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:49 np0005548915 podman[87388]: 2025-12-06 09:40:49.812333907 +0000 UTC m=+0.123055042 container died 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:49 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec  6 04:40:49 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec  6 04:40:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9f74f7ed9a237a607cd5619984ddff9477199a302b090a6901c8fcbc69ba89fa-merged.mount: Deactivated successfully.
Dec  6 04:40:49 np0005548915 podman[87388]: 2025-12-06 09:40:49.860248531 +0000 UTC m=+0.170969676 container remove 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:49 np0005548915 systemd[1]: libpod-conmon-54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f.scope: Deactivated successfully.
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: Saving service node-exporter spec with placement *
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: Saving service grafana spec with placement compute-0;count:1
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: Saving service prometheus spec with placement compute-0;count:1
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: Saving service alertmanager spec with placement compute-0;count:1
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]: dispatch
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.102:0/569971095' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]: dispatch
Dec  6 04:40:49 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]': finished
Dec  6 04:40:50 np0005548915 podman[87452]: 2025-12-06 09:40:50.054876851 +0000 UTC m=+0.063177079 container create 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:50 np0005548915 systemd[1]: Started libpod-conmon-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope.
Dec  6 04:40:50 np0005548915 podman[87452]: 2025-12-06 09:40:50.024224467 +0000 UTC m=+0.032524765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:40:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec  6 04:40:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4267326554' entity='client.admin' 
Dec  6 04:40:50 np0005548915 podman[87452]: 2025-12-06 09:40:50.160184657 +0000 UTC m=+0.168484915 container init 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:50 np0005548915 systemd[1]: libpod-c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6.scope: Deactivated successfully.
Dec  6 04:40:50 np0005548915 podman[87452]: 2025-12-06 09:40:50.170596264 +0000 UTC m=+0.178896462 container start 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  6 04:40:50 np0005548915 podman[87452]: 2025-12-06 09:40:50.231514579 +0000 UTC m=+0.239814817 container attach 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:40:50 np0005548915 podman[87475]: 2025-12-06 09:40:50.244828971 +0000 UTC m=+0.054111665 container died c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:40:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v108: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:50 np0005548915 systemd[1]: var-lib-containers-storage-overlay-85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337-merged.mount: Deactivated successfully.
Dec  6 04:40:50 np0005548915 podman[87475]: 2025-12-06 09:40:50.642637371 +0000 UTC m=+0.451920065 container remove c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:40:50 np0005548915 systemd[1]: libpod-conmon-c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6.scope: Deactivated successfully.
Dec  6 04:40:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  6 04:40:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  6 04:40:50 np0005548915 lvm[87587]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:40:50 np0005548915 lvm[87587]: VG ceph_vg0 finished
Dec  6 04:40:50 np0005548915 python3[87582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:51 np0005548915 nostalgic_perlman[87469]: {}
Dec  6 04:40:51 np0005548915 systemd[1]: libpod-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope: Deactivated successfully.
Dec  6 04:40:51 np0005548915 systemd[1]: libpod-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope: Consumed 1.485s CPU time.
Dec  6 04:40:51 np0005548915 podman[87452]: 2025-12-06 09:40:51.075044813 +0000 UTC m=+1.083345031 container died 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:51 np0005548915 podman[87589]: 2025-12-06 09:40:51.071011482 +0000 UTC m=+0.048087550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:51 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/4267326554' entity='client.admin' 
Dec  6 04:40:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb-merged.mount: Deactivated successfully.
Dec  6 04:40:51 np0005548915 podman[87452]: 2025-12-06 09:40:51.418765529 +0000 UTC m=+1.427065737 container remove 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:51 np0005548915 podman[87589]: 2025-12-06 09:40:51.42589563 +0000 UTC m=+0.402971658 container create 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:51 np0005548915 systemd[1]: libpod-conmon-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope: Deactivated successfully.
Dec  6 04:40:51 np0005548915 systemd[1]: Started libpod-conmon-8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204.scope.
Dec  6 04:40:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:40:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:40:51 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:51 np0005548915 podman[87589]: 2025-12-06 09:40:51.520276581 +0000 UTC m=+0.497352689 container init 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:51 np0005548915 podman[87589]: 2025-12-06 09:40:51.52826236 +0000 UTC m=+0.505338388 container start 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:40:51 np0005548915 podman[87589]: 2025-12-06 09:40:51.531860666 +0000 UTC m=+0.508936784 container attach 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:40:51 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Dec  6 04:40:51 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Dec  6 04:40:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec  6 04:40:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/821839877' entity='client.admin' 
Dec  6 04:40:51 np0005548915 systemd[1]: libpod-8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204.scope: Deactivated successfully.
Dec  6 04:40:51 np0005548915 podman[87589]: 2025-12-06 09:40:51.938725091 +0000 UTC m=+0.915801189 container died 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:40:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2-merged.mount: Deactivated successfully.
Dec  6 04:40:51 np0005548915 podman[87589]: 2025-12-06 09:40:51.992135242 +0000 UTC m=+0.969211310 container remove 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:52 np0005548915 systemd[1]: libpod-conmon-8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204.scope: Deactivated successfully.
Dec  6 04:40:52 np0005548915 python3[87682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:52 np0005548915 podman[87683]: 2025-12-06 09:40:52.428325746 +0000 UTC m=+0.048694559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:52 np0005548915 podman[87683]: 2025-12-06 09:40:52.745682609 +0000 UTC m=+0.366051422 container create 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:40:52 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:52 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:52 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/821839877' entity='client.admin' 
Dec  6 04:40:52 np0005548915 systemd[1]: Started libpod-conmon-112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e.scope.
Dec  6 04:40:52 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:52 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  6 04:40:52 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  6 04:40:53 np0005548915 podman[87683]: 2025-12-06 09:40:53.238914192 +0000 UTC m=+0.859283085 container init 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:40:53 np0005548915 podman[87683]: 2025-12-06 09:40:53.247504221 +0000 UTC m=+0.867873044 container start 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:53 np0005548915 podman[87683]: 2025-12-06 09:40:53.25271257 +0000 UTC m=+0.873081363 container attach 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:40:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec  6 04:40:53 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  6 04:40:53 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  6 04:40:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1482144347' entity='client.admin' 
Dec  6 04:40:53 np0005548915 systemd[1]: libpod-112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e.scope: Deactivated successfully.
Dec  6 04:40:53 np0005548915 podman[87683]: 2025-12-06 09:40:53.936849284 +0000 UTC m=+1.557218067 container died 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:40:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay-5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10-merged.mount: Deactivated successfully.
Dec  6 04:40:53 np0005548915 podman[87683]: 2025-12-06 09:40:53.974620659 +0000 UTC m=+1.594989442 container remove 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:53 np0005548915 systemd[1]: libpod-conmon-112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e.scope: Deactivated successfully.
Dec  6 04:40:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v110: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:54 np0005548915 python3[87758]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  6 04:40:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  6 04:40:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:40:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:40:54 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec  6 04:40:54 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec  6 04:40:54 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  6 04:40:54 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  6 04:40:55 np0005548915 python3[87796]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.qhdjwa/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:40:55 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1482144347' entity='client.admin' 
Dec  6 04:40:55 np0005548915 podman[87797]: 2025-12-06 09:40:55.283912817 +0000 UTC m=+0.102570318 container create 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 04:40:55 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  6 04:40:55 np0005548915 podman[87797]: 2025-12-06 09:40:55.211792608 +0000 UTC m=+0.030450089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:55 np0005548915 systemd[1]: Started libpod-conmon-867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980.scope.
Dec  6 04:40:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:55 np0005548915 podman[87797]: 2025-12-06 09:40:55.434703286 +0000 UTC m=+0.253360797 container init 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:55 np0005548915 podman[87797]: 2025-12-06 09:40:55.441112824 +0000 UTC m=+0.259770325 container start 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:55 np0005548915 podman[87797]: 2025-12-06 09:40:55.444729311 +0000 UTC m=+0.263386792 container attach 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:40:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.qhdjwa/server_addr}] v 0)
Dec  6 04:40:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3512142115' entity='client.admin' 
Dec  6 04:40:55 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  6 04:40:55 np0005548915 systemd[1]: libpod-867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980.scope: Deactivated successfully.
Dec  6 04:40:55 np0005548915 podman[87797]: 2025-12-06 09:40:55.840169264 +0000 UTC m=+0.658826725 container died 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:40:55 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  6 04:40:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d-merged.mount: Deactivated successfully.
Dec  6 04:40:55 np0005548915 podman[87797]: 2025-12-06 09:40:55.875949285 +0000 UTC m=+0.694606746 container remove 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:55 np0005548915 systemd[1]: libpod-conmon-867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980.scope: Deactivated successfully.
Dec  6 04:40:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v111: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:56 np0005548915 ceph-mon[74327]: Deploying daemon osd.2 on compute-2
Dec  6 04:40:56 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/3512142115' entity='client.admin' 
Dec  6 04:40:56 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  6 04:40:56 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  6 04:40:57 np0005548915 python3[87873]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.sauzid/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:57 np0005548915 podman[87874]: 2025-12-06 09:40:57.227860004 +0000 UTC m=+0.071479529 container create 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:40:57 np0005548915 systemd[1]: Started libpod-conmon-8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42.scope.
Dec  6 04:40:57 np0005548915 podman[87874]: 2025-12-06 09:40:57.196583969 +0000 UTC m=+0.040203544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:57 np0005548915 podman[87874]: 2025-12-06 09:40:57.358321334 +0000 UTC m=+0.201940849 container init 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:57 np0005548915 podman[87874]: 2025-12-06 09:40:57.369042862 +0000 UTC m=+0.212662347 container start 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:40:57 np0005548915 podman[87874]: 2025-12-06 09:40:57.372800804 +0000 UTC m=+0.216420319 container attach 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 04:40:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.sauzid/server_addr}] v 0)
Dec  6 04:40:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2451230512' entity='client.admin' 
Dec  6 04:40:57 np0005548915 systemd[1]: libpod-8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42.scope: Deactivated successfully.
Dec  6 04:40:57 np0005548915 podman[87874]: 2025-12-06 09:40:57.798788597 +0000 UTC m=+0.642408122 container died 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:40:57 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  6 04:40:57 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  6 04:40:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b-merged.mount: Deactivated successfully.
Dec  6 04:40:57 np0005548915 podman[87874]: 2025-12-06 09:40:57.866488374 +0000 UTC m=+0.710107859 container remove 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:40:57 np0005548915 systemd[1]: libpod-conmon-8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42.scope: Deactivated successfully.
Dec  6 04:40:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v112: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:40:58 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2451230512' entity='client.admin' 
Dec  6 04:40:58 np0005548915 python3[87951]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.oazbvn/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:58 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  6 04:40:58 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  6 04:40:58 np0005548915 podman[87952]: 2025-12-06 09:40:58.832091978 +0000 UTC m=+0.059509282 container create 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:40:58 np0005548915 systemd[1]: Started libpod-conmon-2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08.scope.
Dec  6 04:40:58 np0005548915 podman[87952]: 2025-12-06 09:40:58.810516756 +0000 UTC m=+0.037934100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:58 np0005548915 podman[87952]: 2025-12-06 09:40:58.92391401 +0000 UTC m=+0.151331334 container init 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:40:58 np0005548915 podman[87952]: 2025-12-06 09:40:58.931545712 +0000 UTC m=+0.158963036 container start 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:58 np0005548915 podman[87952]: 2025-12-06 09:40:58.936077464 +0000 UTC m=+0.163494788 container attach 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.oazbvn/server_addr}] v 0)
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2111286861' entity='client.admin' 
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:59 np0005548915 systemd[1]: libpod-2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08.scope: Deactivated successfully.
Dec  6 04:40:59 np0005548915 podman[87952]: 2025-12-06 09:40:59.392989617 +0000 UTC m=+0.620406911 container died 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  6 04:40:59 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef-merged.mount: Deactivated successfully.
Dec  6 04:40:59 np0005548915 podman[87952]: 2025-12-06 09:40:59.432521897 +0000 UTC m=+0.659939181 container remove 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:59 np0005548915 systemd[1]: libpod-conmon-2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08.scope: Deactivated successfully.
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2111286861' entity='client.admin' 
Dec  6 04:40:59 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:40:59 np0005548915 python3[88029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:40:59 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  6 04:40:59 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  6 04:40:59 np0005548915 podman[88030]: 2025-12-06 09:40:59.88441484 +0000 UTC m=+0.065975207 container create 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:40:59 np0005548915 systemd[1]: Started libpod-conmon-64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c.scope.
Dec  6 04:40:59 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:40:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:40:59 np0005548915 podman[88030]: 2025-12-06 09:40:59.857889151 +0000 UTC m=+0.039449598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:40:59 np0005548915 podman[88030]: 2025-12-06 09:40:59.98310536 +0000 UTC m=+0.164665797 container init 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:40:59 np0005548915 podman[88030]: 2025-12-06 09:40:59.992445634 +0000 UTC m=+0.174006031 container start 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:40:59 np0005548915 podman[88030]: 2025-12-06 09:40:59.996311927 +0000 UTC m=+0.177872334 container attach 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  6 04:41:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  6 04:41:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:41:00 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  6 04:41:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  6 04:41:00 np0005548915 beautiful_yalow[88045]: module 'dashboard' is already disabled
Dec  6 04:41:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.qhdjwa(active, since 2m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:00 np0005548915 systemd[1]: libpod-64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c.scope: Deactivated successfully.
Dec  6 04:41:00 np0005548915 podman[88030]: 2025-12-06 09:41:00.679096539 +0000 UTC m=+0.860656936 container died 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:00 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e-merged.mount: Deactivated successfully.
Dec  6 04:41:00 np0005548915 podman[88030]: 2025-12-06 09:41:00.73102867 +0000 UTC m=+0.912589027 container remove 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:00 np0005548915 systemd[1]: libpod-conmon-64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c.scope: Deactivated successfully.
Dec  6 04:41:00 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Dec  6 04:41:00 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Dec  6 04:41:01 np0005548915 python3[88105]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:01 np0005548915 podman[88106]: 2025-12-06 09:41:01.184591817 +0000 UTC m=+0.055343391 container create 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:01 np0005548915 systemd[1]: Started libpod-conmon-2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f.scope.
Dec  6 04:41:01 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:01 np0005548915 podman[88106]: 2025-12-06 09:41:01.160147154 +0000 UTC m=+0.030898738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:01 np0005548915 podman[88106]: 2025-12-06 09:41:01.279031792 +0000 UTC m=+0.149783396 container init 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:01 np0005548915 podman[88106]: 2025-12-06 09:41:01.285267219 +0000 UTC m=+0.156018813 container start 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:41:01 np0005548915 podman[88106]: 2025-12-06 09:41:01.288834702 +0000 UTC m=+0.159586366 container attach 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  6 04:41:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  6 04:41:01 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  6 04:41:01 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:41:02 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  6 04:41:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  6 04:41:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.qhdjwa(active, since 3m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec  6 04:41:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  6 04:41:02 np0005548915 systemd[1]: libpod-2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd[1]: session-33.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd[1]: session-33.scope: Consumed 25.110s CPU time.
Dec  6 04:41:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec  6 04:41:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec  6 04:41:02 np0005548915 systemd[1]: session-32.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd[1]: session-26.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 33 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd[1]: session-30.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 32 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 podman[88253]: 2025-12-06 09:41:02.869298088 +0000 UTC m=+0.081109295 container died 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 26 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 30 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 33.
Dec  6 04:41:02 np0005548915 systemd[1]: session-23.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd[1]: session-24.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd[1]: session-21.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd[1]: session-28.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd[1]: session-31.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 23 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Dec  6 04:41:02 np0005548915 systemd[1]: session-27.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 24 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 28 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 31 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 21 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 27 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 32.
Dec  6 04:41:02 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Dec  6 04:41:02 np0005548915 systemd[1]: session-29.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 26.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 29 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 30.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 23.
Dec  6 04:41:02 np0005548915 systemd[1]: session-25.scope: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 24.
Dec  6 04:41:02 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd-merged.mount: Deactivated successfully.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Session 25 logged out. Waiting for processes to exit.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 21.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 28.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 31.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 27.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 29.
Dec  6 04:41:02 np0005548915 systemd-logind[795]: Removed session 25.
Dec  6 04:41:02 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec  6 04:41:02 np0005548915 podman[88253]: 2025-12-06 09:41:02.92441868 +0000 UTC m=+0.136229827 container remove 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 04:41:02 np0005548915 systemd[1]: libpod-conmon-2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f.scope: Deactivated successfully.
Dec  6 04:41:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:03.034+0000 7fe91853c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:41:03 np0005548915 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:41:03 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec  6 04:41:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:03.114+0000 7fe91853c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:41:03 np0005548915 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:41:03 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec  6 04:41:03 np0005548915 python3[88311]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:03 np0005548915 podman[88312]: 2025-12-06 09:41:03.499397834 +0000 UTC m=+0.081255729 container create 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:03 np0005548915 systemd[1]: Started libpod-conmon-7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6.scope.
Dec  6 04:41:03 np0005548915 podman[88312]: 2025-12-06 09:41:03.456415416 +0000 UTC m=+0.038273351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:03 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:03 np0005548915 podman[88312]: 2025-12-06 09:41:03.585713833 +0000 UTC m=+0.167571748 container init 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:03 np0005548915 podman[88312]: 2025-12-06 09:41:03.598761185 +0000 UTC m=+0.180619090 container start 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 04:41:03 np0005548915 podman[88312]: 2025-12-06 09:41:03.603281018 +0000 UTC m=+0.185138913 container attach 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: from='osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  6 04:41:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e32 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec  6 04:41:03 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec  6 04:41:03 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  6 04:41:03 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  6 04:41:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:03.927+0000 7fe91853c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:41:03 np0005548915 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:41:03 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec  6 04:41:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.527+0000 7fe91853c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec  6 04:41:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  6 04:41:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  6 04:41:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  from numpy import show_config as show_numpy_config
Dec  6 04:41:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.702+0000 7fe91853c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec  6 04:41:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  6 04:41:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  6 04:41:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Dec  6 04:41:04 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.392019272s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.481727600s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.392019272s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481727600s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.088809967s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178733826s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.082242012s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.172164917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.088809967s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178733826s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.082242012s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.172164917s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.604486465s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.694526672s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.604486465s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.694526672s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.391054153s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.481193542s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.391054153s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481193542s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087738991s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.177993774s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087738991s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177993774s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390565872s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390565872s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087518692s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.177909851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087518692s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177909851s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087523460s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.177986145s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087523460s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177986145s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087460518s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178039551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087460518s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178039551s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390346527s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.481040955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390346527s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481040955s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087246895s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178054810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390001297s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390001297s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087246895s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178054810s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087553978s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178504944s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087553978s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178504944s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087446213s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178489685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087446213s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178489685s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087342262s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178512573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087305069s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178497314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087342262s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178512573s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087305069s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178497314s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087290764s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178657532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087290764s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178657532s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383893967s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.475387573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383893967s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475387573s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087006569s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178642273s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087006569s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178642273s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383605957s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.475349426s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383605957s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475349426s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:04 np0005548915 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  6 04:41:04 np0005548915 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  6 04:41:04 np0005548915 ceph-mon[74327]: from='osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  6 04:41:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.783+0000 7fe91853c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec  6 04:41:04 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  6 04:41:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.939+0000 7fe91853c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:41:04 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec  6 04:41:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:05 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec  6 04:41:05 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec  6 04:41:05 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec  6 04:41:05 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec  6 04:41:05 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  6 04:41:05 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  6 04:41:05 np0005548915 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  6 04:41:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:05.982+0000 7fe91853c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:41:05 np0005548915 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:41:05 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec  6 04:41:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.236+0000 7fe91853c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec  6 04:41:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.314+0000 7fe91853c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec  6 04:41:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.386+0000 7fe91853c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec  6 04:41:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.469+0000 7fe91853c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec  6 04:41:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.538+0000 7fe91853c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec  6 04:41:06 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec  6 04:41:06 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec  6 04:41:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.885+0000 7fe91853c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec  6 04:41:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.986+0000 7fe91853c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:41:06 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec  6 04:41:07 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec  6 04:41:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:07.403+0000 7fe91853c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:41:07 np0005548915 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:41:07 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec  6 04:41:07 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  6 04:41:07 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  6 04:41:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.023+0000 7fe91853c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec  6 04:41:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.097+0000 7fe91853c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec  6 04:41:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.174+0000 7fe91853c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec  6 04:41:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.317+0000 7fe91853c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec  6 04:41:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.382+0000 7fe91853c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec  6 04:41:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.533+0000 7fe91853c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec  6 04:41:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.747+0000 7fe91853c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:08 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec  6 04:41:08 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec  6 04:41:08 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.qhdjwa(active, since 3m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:09.010+0000 7fe91853c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec  6 04:41:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:09.082+0000 7fe91853c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x55b59c857860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.qhdjwa(active, starting, since 0.0389693s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load_all_metadata Skipping incomplete metadata entry
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [balancer INFO root] Starting
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:41:09
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: dashboard
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO sso] Loading SSO DB version=1
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [progress INFO root] Loading...
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fe8958ae1c0>, <progress.module.GhostEvent object at 0x7fe8958ae1f0>, <progress.module.GhostEvent object at 0x7fe8958ae220>, <progress.module.GhostEvent object at 0x7fe8958ae250>, <progress.module.GhostEvent object at 0x7fe8958ae280>, <progress.module.GhostEvent object at 0x7fe8958ae2b0>, <progress.module.GhostEvent object at 0x7fe8958ae2e0>, <progress.module.GhostEvent object at 0x7fe8958ae310>, <progress.module.GhostEvent object at 0x7fe8958ae340>, <progress.module.GhostEvent object at 0x7fe8958ae370>] historic events
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec  6 04:41:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  6 04:41:09 np0005548915 systemd-logind[795]: New session 34 of user ceph-admin.
Dec  6 04:41:09 np0005548915 systemd[1]: Started Session 34 of User ceph-admin.
Dec  6 04:41:09 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.module] Engine started.
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.qhdjwa(active, since 1.10991s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:10 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec  6 04:41:10 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/709563040; not ready for session (expect reconnect)
Dec  6 04:41:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 87 active+clean, 44 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  6 04:41:10 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:10 np0005548915 fervent_hugle[88339]: Option GRAFANA_API_USERNAME updated
Dec  6 04:41:10 np0005548915 systemd[1]: libpod-7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6.scope: Deactivated successfully.
Dec  6 04:41:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:10 np0005548915 podman[88549]: 2025-12-06 09:41:10.313395405 +0000 UTC m=+0.028701399 container died 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:41:10 np0005548915 systemd[1]: var-lib-containers-storage-overlay-041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318-merged.mount: Deactivated successfully.
Dec  6 04:41:10 np0005548915 podman[88549]: 2025-12-06 09:41:10.36637539 +0000 UTC m=+0.081681404 container remove 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:41:10 np0005548915 systemd[1]: libpod-conmon-7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6.scope: Deactivated successfully.
Dec  6 04:41:10 np0005548915 podman[88650]: 2025-12-06 09:41:10.664011647 +0000 UTC m=+0.051338184 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:41:10 np0005548915 python3[88636]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec  6 04:41:10 np0005548915 podman[88670]: 2025-12-06 09:41:10.739128602 +0000 UTC m=+0.040441520 container create 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:41:10 np0005548915 podman[88650]: 2025-12-06 09:41:10.768852251 +0000 UTC m=+0.156178758 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:10 np0005548915 systemd[1]: Started libpod-conmon-90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f.scope.
Dec  6 04:41:10 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:10 np0005548915 podman[88670]: 2025-12-06 09:41:10.719819281 +0000 UTC m=+0.021132219 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:10 np0005548915 podman[88670]: 2025-12-06 09:41:10.822019801 +0000 UTC m=+0.123332759 container init 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 04:41:10 np0005548915 podman[88670]: 2025-12-06 09:41:10.831666427 +0000 UTC m=+0.132979335 container start 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:10 np0005548915 podman[88670]: 2025-12-06 09:41:10.835533799 +0000 UTC m=+0.136846777 container attach 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Bus STARTING
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Bus STARTING
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14337 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/709563040; not ready for session (expect reconnect)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Client ('192.168.122.100', 54474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Client ('192.168.122.100', 54474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: OSD bench result of 3012.211775 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.qhdjwa(active, since 2s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040] boot
Dec  6 04:41:11 np0005548915 trusting_panini[88692]: Option GRAFANA_API_PASSWORD updated
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  6 04:41:11 np0005548915 systemd[1]: libpod-90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f.scope: Deactivated successfully.
Dec  6 04:41:11 np0005548915 podman[88670]: 2025-12-06 09:41:11.312323549 +0000 UTC m=+0.613636487 container died 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Bus STARTED
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Bus STARTED
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 87 active+clean, 44 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  6 04:41:11 np0005548915 systemd[1]: var-lib-containers-storage-overlay-60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881-merged.mount: Deactivated successfully.
Dec  6 04:41:11 np0005548915 podman[88670]: 2025-12-06 09:41:11.358305133 +0000 UTC m=+0.659618071 container remove 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec  6 04:41:11 np0005548915 systemd[1]: libpod-conmon-90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f.scope: Deactivated successfully.
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.463917732s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.172164917s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.463890076s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.172164917s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.773426533s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481727600s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.773363113s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481727600s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.470203400s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178733826s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.470160484s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178733826s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772573471s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481193542s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772561550s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481193542s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772043228s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772028446s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469065666s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177986145s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469051361s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177986145s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468970776s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177993774s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469011784s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178039551s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469001770s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178039551s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468954563s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177993774s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.985382080s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.694526672s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468731880s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177909851s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468718529s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177909851s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.985312939s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.694526672s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468796253s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178054810s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771787167s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481040955s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468785763s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178054810s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771754265s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481040955s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771402836s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771388054s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468976498s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178504944s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468938828s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178504944s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468895435s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178489685s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468878746s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178489685s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468868732s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178512573s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468836308s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178497314s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468852043s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178512573s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468826771s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178497314s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468916893s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178657532s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765595436s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475387573s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468838692s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178642273s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765582561s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475387573s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468828678s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178642273s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468902588s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178657532s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765411377s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475349426s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:41:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765394211s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475349426s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:41:11 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:41:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:11 np0005548915 python3[88894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:11 np0005548915 podman[88910]: 2025-12-06 09:41:11.782316185 +0000 UTC m=+0.045273582 container create c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Dec  6 04:41:11 np0005548915 systemd[1]: Started libpod-conmon-c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb.scope.
Dec  6 04:41:11 np0005548915 podman[88910]: 2025-12-06 09:41:11.7609849 +0000 UTC m=+0.023942297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:11 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:11 np0005548915 podman[88910]: 2025-12-06 09:41:11.87834214 +0000 UTC m=+0.141299587 container init c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 04:41:11 np0005548915 podman[88910]: 2025-12-06 09:41:11.891331951 +0000 UTC m=+0.154289358 container start c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:11 np0005548915 podman[88910]: 2025-12-06 09:41:11.895912626 +0000 UTC m=+0.158870033 container attach c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Bus STARTING
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Client ('192.168.122.100', 54474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040] boot
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Bus STARTED
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 nervous_satoshi[88927]: Option ALERTMANAGER_API_HOST updated
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:41:12 np0005548915 systemd[1]: libpod-c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb.scope: Deactivated successfully.
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:41:12 np0005548915 podman[88910]: 2025-12-06 09:41:12.315814858 +0000 UTC m=+0.578772225 container died c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  6 04:41:12 np0005548915 systemd[1]: var-lib-containers-storage-overlay-988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1-merged.mount: Deactivated successfully.
Dec  6 04:41:12 np0005548915 podman[88910]: 2025-12-06 09:41:12.36522951 +0000 UTC m=+0.628186887 container remove c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  6 04:41:12 np0005548915 systemd[1]: libpod-conmon-c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb.scope: Deactivated successfully.
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:41:12 np0005548915 python3[89068]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:12 np0005548915 podman[89069]: 2025-12-06 09:41:12.742450284 +0000 UTC m=+0.043199357 container create 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:41:12 np0005548915 systemd[1]: Started libpod-conmon-4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870.scope.
Dec  6 04:41:12 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:12 np0005548915 podman[89069]: 2025-12-06 09:41:12.724809395 +0000 UTC m=+0.025558468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:12 np0005548915 podman[89069]: 2025-12-06 09:41:12.839098808 +0000 UTC m=+0.139847891 container init 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:12 np0005548915 podman[89069]: 2025-12-06 09:41:12.84801889 +0000 UTC m=+0.148767943 container start 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  6 04:41:12 np0005548915 podman[89069]: 2025-12-06 09:41:12.851460889 +0000 UTC m=+0.152209952 container attach 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec  6 04:41:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  6 04:41:12 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  6 04:41:13 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 admiring_hamilton[89085]: Option PROMETHEUS_API_HOST updated
Dec  6 04:41:13 np0005548915 systemd[1]: libpod-4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870.scope: Deactivated successfully.
Dec  6 04:41:13 np0005548915 podman[89069]: 2025-12-06 09:41:13.232832094 +0000 UTC m=+0.533581147 container died 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:13 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60-merged.mount: Deactivated successfully.
Dec  6 04:41:13 np0005548915 podman[89069]: 2025-12-06 09:41:13.280768739 +0000 UTC m=+0.581517792 container remove 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:13 np0005548915 systemd[1]: libpod-conmon-4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870.scope: Deactivated successfully.
Dec  6 04:41:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v7: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:13 np0005548915 python3[89146]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:13 np0005548915 podman[89147]: 2025-12-06 09:41:13.840255314 +0000 UTC m=+0.058424898 container create b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:41:13 np0005548915 systemd[1]: Started libpod-conmon-b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405.scope.
Dec  6 04:41:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:13 np0005548915 podman[89147]: 2025-12-06 09:41:13.823128173 +0000 UTC m=+0.041297727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:13 np0005548915 podman[89147]: 2025-12-06 09:41:13.930162796 +0000 UTC m=+0.148332360 container init b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:13 np0005548915 podman[89147]: 2025-12-06 09:41:13.938638983 +0000 UTC m=+0.156808527 container start b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:41:13 np0005548915 podman[89147]: 2025-12-06 09:41:13.944984244 +0000 UTC m=+0.163153888 container attach b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:41:13 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:14 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24160 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  6 04:41:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:14 np0005548915 dazzling_chaplygin[89162]: Option GRAFANA_API_URL updated
Dec  6 04:41:14 np0005548915 systemd[1]: libpod-b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405.scope: Deactivated successfully.
Dec  6 04:41:14 np0005548915 podman[89147]: 2025-12-06 09:41:14.401496513 +0000 UTC m=+0.619666067 container died b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 04:41:14 np0005548915 systemd[1]: var-lib-containers-storage-overlay-fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932-merged.mount: Deactivated successfully.
Dec  6 04:41:14 np0005548915 podman[89147]: 2025-12-06 09:41:14.438048659 +0000 UTC m=+0.656218233 container remove b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:14 np0005548915 systemd[1]: libpod-conmon-b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405.scope: Deactivated successfully.
Dec  6 04:41:14 np0005548915 python3[89347]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:14 np0005548915 podman[89380]: 2025-12-06 09:41:14.875588949 +0000 UTC m=+0.062539148 container create 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:14 np0005548915 systemd[1]: Started libpod-conmon-002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32.scope.
Dec  6 04:41:14 np0005548915 podman[89380]: 2025-12-06 09:41:14.847379387 +0000 UTC m=+0.034329666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:14 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:14 np0005548915 podman[89380]: 2025-12-06 09:41:14.975886599 +0000 UTC m=+0.162836818 container init 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:41:14 np0005548915 podman[89380]: 2025-12-06 09:41:14.987647781 +0000 UTC m=+0.174597990 container start 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:14 np0005548915 podman[89380]: 2025-12-06 09:41:14.991579225 +0000 UTC m=+0.178529444 container attach 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v8: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 14 op/s
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  6 04:41:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:15 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr respawn  exe_path /proc/self/exe
Dec  6 04:41:16 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.qhdjwa(active, since 7s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:16 np0005548915 systemd[1]: libpod-002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32.scope: Deactivated successfully.
Dec  6 04:41:16 np0005548915 podman[89380]: 2025-12-06 09:41:16.380789496 +0000 UTC m=+1.567739695 container died 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:16 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376-merged.mount: Deactivated successfully.
Dec  6 04:41:16 np0005548915 podman[89380]: 2025-12-06 09:41:16.419109347 +0000 UTC m=+1.606059546 container remove 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:16 np0005548915 systemd[1]: libpod-conmon-002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32.scope: Deactivated successfully.
Dec  6 04:41:16 np0005548915 systemd[1]: session-34.scope: Deactivated successfully.
Dec  6 04:41:16 np0005548915 systemd[1]: session-34.scope: Consumed 4.351s CPU time.
Dec  6 04:41:16 np0005548915 systemd-logind[795]: Session 34 logged out. Waiting for processes to exit.
Dec  6 04:41:16 np0005548915 systemd-logind[795]: Removed session 34.
Dec  6 04:41:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec  6 04:41:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec  6 04:41:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:16.648+0000 7f0044f10140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:41:16 np0005548915 python3[89966]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:16.725+0000 7f0044f10140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:41:16 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec  6 04:41:16 np0005548915 podman[89967]: 2025-12-06 09:41:16.795613218 +0000 UTC m=+0.056832858 container create 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 04:41:16 np0005548915 systemd[1]: Started libpod-conmon-1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8.scope.
Dec  6 04:41:16 np0005548915 podman[89967]: 2025-12-06 09:41:16.779838279 +0000 UTC m=+0.041057939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:16 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:16 np0005548915 podman[89967]: 2025-12-06 09:41:16.905225673 +0000 UTC m=+0.166445363 container init 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:16 np0005548915 podman[89967]: 2025-12-06 09:41:16.929601012 +0000 UTC m=+0.190820652 container start 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:16 np0005548915 podman[89967]: 2025-12-06 09:41:16.932898947 +0000 UTC m=+0.194118647 container attach 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  6 04:41:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  6 04:41:17 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  6 04:41:17 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  6 04:41:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  6 04:41:17 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.qhdjwa(active, since 8s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:17 np0005548915 systemd[1]: libpod-1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8.scope: Deactivated successfully.
Dec  6 04:41:17 np0005548915 podman[89967]: 2025-12-06 09:41:17.420738407 +0000 UTC m=+0.681958047 container died 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:17 np0005548915 systemd[1]: var-lib-containers-storage-overlay-40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4-merged.mount: Deactivated successfully.
Dec  6 04:41:17 np0005548915 podman[89967]: 2025-12-06 09:41:17.460833864 +0000 UTC m=+0.722053504 container remove 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 04:41:17 np0005548915 systemd[1]: libpod-conmon-1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8.scope: Deactivated successfully.
Dec  6 04:41:17 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec  6 04:41:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:17.579+0000 7f0044f10140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:41:17 np0005548915 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:41:17 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec  6 04:41:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.217+0000 7f0044f10140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec  6 04:41:18 np0005548915 python3[90107]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:41:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  6 04:41:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  6 04:41:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  from numpy import show_config as show_numpy_config
Dec  6 04:41:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.369+0000 7f0044f10140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec  6 04:41:18 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  6 04:41:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.436+0000 7f0044f10140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec  6 04:41:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.563+0000 7f0044f10140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec  6 04:41:18 np0005548915 python3[90178]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014077.9581857-37343-69206165590557/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec  6 04:41:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec  6 04:41:19 np0005548915 python3[90228]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec  6 04:41:19 np0005548915 podman[90229]: 2025-12-06 09:41:19.298769859 +0000 UTC m=+0.050260950 container create e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec  6 04:41:19 np0005548915 systemd[1]: Started libpod-conmon-e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2.scope.
Dec  6 04:41:19 np0005548915 podman[90229]: 2025-12-06 09:41:19.277684473 +0000 UTC m=+0.029175594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:19 np0005548915 podman[90229]: 2025-12-06 09:41:19.4013035 +0000 UTC m=+0.152794611 container init e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:19 np0005548915 podman[90229]: 2025-12-06 09:41:19.409418737 +0000 UTC m=+0.160909838 container start e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:19 np0005548915 podman[90229]: 2025-12-06 09:41:19.417537603 +0000 UTC m=+0.169028695 container attach e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.548+0000 7f0044f10140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec  6 04:41:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.758+0000 7f0044f10140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec  6 04:41:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.830+0000 7f0044f10140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec  6 04:41:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.893+0000 7f0044f10140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec  6 04:41:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.967+0000 7f0044f10140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:41:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec  6 04:41:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.036+0000 7f0044f10140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec  6 04:41:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.368+0000 7f0044f10140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec  6 04:41:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.476+0000 7f0044f10140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec  6 04:41:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.922+0000 7f0044f10140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:41:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec  6 04:41:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.476+0000 7f0044f10140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec  6 04:41:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.553+0000 7f0044f10140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec  6 04:41:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.641+0000 7f0044f10140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec  6 04:41:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.799+0000 7f0044f10140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec  6 04:41:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.870+0000 7f0044f10140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:41:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.031+0000 7f0044f10140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.280+0000 7f0044f10140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.qhdjwa(active, since 13s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.553+0000 7f0044f10140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.643+0000 7f0044f10140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x559abcb13860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  6 04:41:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.qhdjwa(active, starting, since 0.0444664s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.895+0000 7f5366bee140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:41:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec  6 04:41:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.977+0000 7f5366bee140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:41:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec  6 04:41:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec  6 04:41:23 np0005548915 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:41:23 np0005548915 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec  6 04:41:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.qhdjwa(active, starting, since 1.05925s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:23 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec  6 04:41:23 np0005548915 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:41:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:23.832+0000 7f5366bee140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:41:23 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.446+0000 7f5366bee140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec  6 04:41:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  6 04:41:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  6 04:41:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  from numpy import show_config as show_numpy_config
Dec  6 04:41:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.611+0000 7f5366bee140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec  6 04:41:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.681+0000 7f5366bee140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec  6 04:41:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.809+0000 7f5366bee140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:41:24 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec  6 04:41:25 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec  6 04:41:25 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec  6 04:41:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:25 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec  6 04:41:25 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec  6 04:41:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:25.825+0000 7f5366bee140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:41:25 np0005548915 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:41:25 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec  6 04:41:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.053+0000 7f5366bee140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec  6 04:41:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.134+0000 7f5366bee140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec  6 04:41:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.203+0000 7f5366bee140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec  6 04:41:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.281+0000 7f5366bee140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec  6 04:41:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.364+0000 7f5366bee140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec  6 04:41:26 np0005548915 systemd[1]: Stopping User Manager for UID 42477...
Dec  6 04:41:26 np0005548915 systemd[75653]: Activating special unit Exit the Session...
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped target Main User Target.
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped target Basic System.
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped target Paths.
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped target Sockets.
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped target Timers.
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 04:41:26 np0005548915 systemd[75653]: Closed D-Bus User Message Bus Socket.
Dec  6 04:41:26 np0005548915 systemd[75653]: Stopped Create User's Volatile Files and Directories.
Dec  6 04:41:26 np0005548915 systemd[75653]: Removed slice User Application Slice.
Dec  6 04:41:26 np0005548915 systemd[75653]: Reached target Shutdown.
Dec  6 04:41:26 np0005548915 systemd[75653]: Finished Exit the Session.
Dec  6 04:41:26 np0005548915 systemd[75653]: Reached target Exit the Session.
Dec  6 04:41:26 np0005548915 systemd[1]: user@42477.service: Deactivated successfully.
Dec  6 04:41:26 np0005548915 systemd[1]: Stopped User Manager for UID 42477.
Dec  6 04:41:26 np0005548915 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  6 04:41:26 np0005548915 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  6 04:41:26 np0005548915 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  6 04:41:26 np0005548915 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  6 04:41:26 np0005548915 systemd[1]: Removed slice User Slice of UID 42477.
Dec  6 04:41:26 np0005548915 systemd[1]: user-42477.slice: Consumed 31.408s CPU time.
Dec  6 04:41:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.727+0000 7f5366bee140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec  6 04:41:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.818+0000 7f5366bee140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:41:26 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec  6 04:41:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.218+0000 7f5366bee140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec  6 04:41:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.784+0000 7f5366bee140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec  6 04:41:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.849+0000 7f5366bee140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec  6 04:41:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.921+0000 7f5366bee140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec  6 04:41:27 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec  6 04:41:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.063+0000 7f5366bee140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec  6 04:41:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.132+0000 7f5366bee140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec  6 04:41:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.286+0000 7f5366bee140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec  6 04:41:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.515+0000 7f5366bee140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec  6 04:41:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.783+0000 7f5366bee140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.qhdjwa(active, starting, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.891+0000 7f5366bee140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x56090a73d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.qhdjwa(active, starting, since 0.0265623s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] Starting
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:41:28
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: dashboard
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [dashboard INFO sso] Loading SSO DB version=1
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [progress INFO root] Loading...
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f52e9783c40>, <progress.module.GhostEvent object at 0x7f52e9783f10>, <progress.module.GhostEvent object at 0x7f52e9783f40>, <progress.module.GhostEvent object at 0x7f52e9783f70>, <progress.module.GhostEvent object at 0x7f52e9783fa0>, <progress.module.GhostEvent object at 0x7f52e9783fd0>, <progress.module.GhostEvent object at 0x7f52e979d040>, <progress.module.GhostEvent object at 0x7f52e979d070>, <progress.module.GhostEvent object at 0x7f52e979d0a0>, <progress.module.GhostEvent object at 0x7f52e979d0d0>] historic events
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec  6 04:41:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:28 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  6 04:41:29 np0005548915 systemd-logind[795]: New session 35 of user ceph-admin.
Dec  6 04:41:29 np0005548915 systemd[1]: Created slice User Slice of UID 42477.
Dec  6 04:41:29 np0005548915 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  6 04:41:29 np0005548915 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.module] Engine started.
Dec  6 04:41:29 np0005548915 systemd[1]: Starting User Manager for UID 42477...
Dec  6 04:41:29 np0005548915 systemd[90433]: Queued start job for default target Main User Target.
Dec  6 04:41:29 np0005548915 systemd[90433]: Created slice User Application Slice.
Dec  6 04:41:29 np0005548915 systemd[90433]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 04:41:29 np0005548915 systemd[90433]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 04:41:29 np0005548915 systemd[90433]: Reached target Paths.
Dec  6 04:41:29 np0005548915 systemd[90433]: Reached target Timers.
Dec  6 04:41:29 np0005548915 systemd[90433]: Starting D-Bus User Message Bus Socket...
Dec  6 04:41:29 np0005548915 systemd[90433]: Starting Create User's Volatile Files and Directories...
Dec  6 04:41:29 np0005548915 systemd[90433]: Finished Create User's Volatile Files and Directories.
Dec  6 04:41:29 np0005548915 systemd[90433]: Listening on D-Bus User Message Bus Socket.
Dec  6 04:41:29 np0005548915 systemd[90433]: Reached target Sockets.
Dec  6 04:41:29 np0005548915 systemd[90433]: Reached target Basic System.
Dec  6 04:41:29 np0005548915 systemd[90433]: Reached target Main User Target.
Dec  6 04:41:29 np0005548915 systemd[90433]: Startup finished in 122ms.
Dec  6 04:41:29 np0005548915 systemd[1]: Started User Manager for UID 42477.
Dec  6 04:41:29 np0005548915 systemd[1]: Started Session 35 of User ceph-admin.
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.qhdjwa(active, since 1.06111s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14391 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  6 04:41:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0[74323]: 2025-12-06T09:41:29.966+0000 7f374f329640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e2 new map
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2025-12-06T09:41:29:967825+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:41:29.967778+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:29 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  6 04:41:30 np0005548915 systemd[1]: libpod-e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2.scope: Deactivated successfully.
Dec  6 04:41:30 np0005548915 podman[90511]: 2025-12-06 09:41:30.09590233 +0000 UTC m=+0.039280502 container died e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Bus STARTING
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Bus STARTING
Dec  6 04:41:30 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de-merged.mount: Deactivated successfully.
Dec  6 04:41:30 np0005548915 podman[90511]: 2025-12-06 09:41:30.149915808 +0000 UTC m=+0.093293980 container remove e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 04:41:30 np0005548915 systemd[1]: libpod-conmon-e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2.scope: Deactivated successfully.
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Client ('192.168.122.100', 59318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Client ('192.168.122.100', 59318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Bus STARTED
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Bus STARTED
Dec  6 04:41:30 np0005548915 podman[90633]: 2025-12-06 09:41:30.48944357 +0000 UTC m=+0.072556515 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:30 np0005548915 python3[90618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:30 np0005548915 podman[90656]: 2025-12-06 09:41:30.559567706 +0000 UTC m=+0.055044331 container create 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:41:30 np0005548915 podman[90633]: 2025-12-06 09:41:30.585925339 +0000 UTC m=+0.169038244 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:41:30 np0005548915 systemd[1]: Started libpod-conmon-1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4.scope.
Dec  6 04:41:30 np0005548915 podman[90656]: 2025-12-06 09:41:30.529468665 +0000 UTC m=+0.024945330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:30 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:30 np0005548915 podman[90656]: 2025-12-06 09:41:30.656172179 +0000 UTC m=+0.151648784 container init 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 podman[90656]: 2025-12-06 09:41:30.663989937 +0000 UTC m=+0.159466522 container start 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:30 np0005548915 podman[90656]: 2025-12-06 09:41:30.667737945 +0000 UTC m=+0.163214530 container attach 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Bus STARTING
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Client ('192.168.122.100', 59318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Bus STARTED
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:41:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.qhdjwa(active, since 2s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 great_curie[90681]: Scheduled mds.cephfs update...
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 04:41:31 np0005548915 systemd[1]: libpod-1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4.scope: Deactivated successfully.
Dec  6 04:41:31 np0005548915 podman[90656]: 2025-12-06 09:41:31.080019477 +0000 UTC m=+0.575496062 container died 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:41:31 np0005548915 systemd[1]: var-lib-containers-storage-overlay-911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199-merged.mount: Deactivated successfully.
Dec  6 04:41:31 np0005548915 podman[90656]: 2025-12-06 09:41:31.121917712 +0000 UTC m=+0.617394307 container remove 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 04:41:31 np0005548915 systemd[1]: libpod-conmon-1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4.scope: Deactivated successfully.
Dec  6 04:41:31 np0005548915 python3[90871]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:31 np0005548915 podman[90879]: 2025-12-06 09:41:31.511937709 +0000 UTC m=+0.037011940 container create 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:31 np0005548915 systemd[1]: Started libpod-conmon-632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1.scope.
Dec  6 04:41:31 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:31 np0005548915 podman[90879]: 2025-12-06 09:41:31.495000304 +0000 UTC m=+0.020074565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:31 np0005548915 podman[90879]: 2025-12-06 09:41:31.599329792 +0000 UTC m=+0.124404083 container init 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:31 np0005548915 podman[90879]: 2025-12-06 09:41:31.606659494 +0000 UTC m=+0.131733755 container start 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 04:41:31 np0005548915 podman[90879]: 2025-12-06 09:41:31.611457765 +0000 UTC m=+0.136532026 container attach 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  6 04:41:31 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:31 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v6: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:41:32 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:41:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  6 04:41:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  6 04:41:33 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  6 04:41:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec  6 04:41:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:33 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 systemd[1]: libpod-632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1.scope: Deactivated successfully.
Dec  6 04:41:34 np0005548915 podman[91606]: 2025-12-06 09:41:34.177503074 +0000 UTC m=+0.024787735 container died 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b-merged.mount: Deactivated successfully.
Dec  6 04:41:34 np0005548915 podman[91606]: 2025-12-06 09:41:34.217978273 +0000 UTC m=+0.065262914 container remove 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:41:34 np0005548915 systemd[1]: libpod-conmon-632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1.scope: Deactivated successfully.
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:34 np0005548915 python3[91931]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v9: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 14 op/s
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:35 np0005548915 python3[92066]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014094.6265876-37374-18139629387268/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=944de880f37676f80f6e04a4864888bf3f7decbf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:35 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 718093b7-ae24-4ca4-868b-ad896e0c544f (Updating node-exporter deployment (+3 -> 3))
Dec  6 04:41:35 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec  6 04:41:35 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec  6 04:41:35 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.qhdjwa(active, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:41:35 np0005548915 python3[92166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:35 np0005548915 podman[92190]: 2025-12-06 09:41:35.946541431 +0000 UTC m=+0.056939672 container create 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:35 np0005548915 systemd[1]: Started libpod-conmon-7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7.scope.
Dec  6 04:41:35 np0005548915 systemd[1]: Reloading.
Dec  6 04:41:36 np0005548915 podman[92190]: 2025-12-06 09:41:35.923903045 +0000 UTC m=+0.034301306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:36 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:41:36 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:41:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:36 np0005548915 ceph-mon[74327]: Deploying daemon node-exporter.compute-0 on compute-0
Dec  6 04:41:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590daf0929948d12f51863f1bed825aa95e20812ca7a613eadc2e28f1ca041df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590daf0929948d12f51863f1bed825aa95e20812ca7a613eadc2e28f1ca041df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:36 np0005548915 podman[92190]: 2025-12-06 09:41:36.248914198 +0000 UTC m=+0.359312459 container init 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 04:41:36 np0005548915 podman[92190]: 2025-12-06 09:41:36.257713426 +0000 UTC m=+0.368111667 container start 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:36 np0005548915 podman[92190]: 2025-12-06 09:41:36.26130255 +0000 UTC m=+0.371700801 container attach 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:41:36 np0005548915 systemd[1]: Reloading.
Dec  6 04:41:36 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:41:36 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:41:36 np0005548915 systemd[1]: Starting Ceph node-exporter.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:41:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Dec  6 04:41:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  6 04:41:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  6 04:41:36 np0005548915 systemd[1]: libpod-7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7.scope: Deactivated successfully.
Dec  6 04:41:36 np0005548915 podman[92190]: 2025-12-06 09:41:36.7450313 +0000 UTC m=+0.855429581 container died 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:36 np0005548915 systemd[1]: var-lib-containers-storage-overlay-590daf0929948d12f51863f1bed825aa95e20812ca7a613eadc2e28f1ca041df-merged.mount: Deactivated successfully.
Dec  6 04:41:36 np0005548915 podman[92190]: 2025-12-06 09:41:36.791920262 +0000 UTC m=+0.902318523 container remove 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:41:36 np0005548915 systemd[1]: libpod-conmon-7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7.scope: Deactivated successfully.
Dec  6 04:41:36 np0005548915 bash[92371]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec  6 04:41:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v11: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec  6 04:41:37 np0005548915 bash[92371]: Getting image source signatures
Dec  6 04:41:37 np0005548915 bash[92371]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec  6 04:41:37 np0005548915 bash[92371]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec  6 04:41:37 np0005548915 bash[92371]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec  6 04:41:37 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  6 04:41:37 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  6 04:41:37 np0005548915 python3[92464]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:37 np0005548915 bash[92371]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec  6 04:41:37 np0005548915 bash[92371]: Writing manifest to image destination
Dec  6 04:41:37 np0005548915 podman[92466]: 2025-12-06 09:41:37.757621796 +0000 UTC m=+0.145972405 container create 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:37 np0005548915 podman[92371]: 2025-12-06 09:41:37.789635938 +0000 UTC m=+1.036713850 container create 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:41:37 np0005548915 systemd[1]: Started libpod-conmon-74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8.scope.
Dec  6 04:41:37 np0005548915 podman[92371]: 2025-12-06 09:41:37.77323374 +0000 UTC m=+1.020311682 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec  6 04:41:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddccd74815bd653ea0db1a678ef9c01e55697a06a91ab1f9e3536113257628cf/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099bd82de2037c329caa3e9388cd4eebd587128552f3c7ab50078257366b7227/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099bd82de2037c329caa3e9388cd4eebd587128552f3c7ab50078257366b7227/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:37 np0005548915 podman[92371]: 2025-12-06 09:41:37.830598803 +0000 UTC m=+1.077676735 container init 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:41:37 np0005548915 podman[92466]: 2025-12-06 09:41:37.836641953 +0000 UTC m=+0.224992592 container init 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:37 np0005548915 podman[92371]: 2025-12-06 09:41:37.836945784 +0000 UTC m=+1.084023696 container start 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:41:37 np0005548915 podman[92466]: 2025-12-06 09:41:37.741455546 +0000 UTC m=+0.129806185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:37 np0005548915 bash[92371]: 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333
Dec  6 04:41:37 np0005548915 podman[92466]: 2025-12-06 09:41:37.842942163 +0000 UTC m=+0.231292782 container start 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.845Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.845Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.846Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.846Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  6 04:41:37 np0005548915 systemd[1]: Started Ceph node-exporter.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:41:37 np0005548915 podman[92466]: 2025-12-06 09:41:37.846713412 +0000 UTC m=+0.235064041 container attach 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:117 level=info collector=arp
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:117 level=info collector=bcache
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:117 level=info collector=bonding
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=cpu
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=dmi
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=edac
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=entropy
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=filefd
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=hwmon
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=netclass
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=netdev
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=netstat
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=nfs
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=nvme
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=os
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=pressure
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=rapl
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=selinux
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=softnet
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=stat
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=textfile
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=time
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=uname
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=xfs
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=zfs
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.853Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec  6 04:41:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.853Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec  6 04:41:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:38 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec  6 04:41:38 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824556430' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  6 04:41:38 np0005548915 adoring_haibt[92495]: 
Dec  6 04:41:38 np0005548915 adoring_haibt[92495]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":72,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1765014071,"num_in_osds":3,"osd_in_since":1765014049,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84193280,"bytes_avail":64327733248,"bytes_total":64411926528,"read_bytes_sec":29820,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-12-06T09:41:29:967825+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-12-06T09:40:50.551863+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.sauzid":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.oazbvn":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"718093b7-ae24-4ca4-868b-ad896e0c544f":{"message":"Updating node-exporter deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  6 04:41:38 np0005548915 systemd[1]: libpod-74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8.scope: Deactivated successfully.
Dec  6 04:41:38 np0005548915 podman[92466]: 2025-12-06 09:41:38.500563769 +0000 UTC m=+0.888914418 container died 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay-099bd82de2037c329caa3e9388cd4eebd587128552f3c7ab50078257366b7227-merged.mount: Deactivated successfully.
Dec  6 04:41:38 np0005548915 podman[92466]: 2025-12-06 09:41:38.552562833 +0000 UTC m=+0.940913452 container remove 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:41:38 np0005548915 systemd[1]: libpod-conmon-74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8.scope: Deactivated successfully.
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:38 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:38 np0005548915 python3[92566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:38 np0005548915 podman[92567]: 2025-12-06 09:41:38.934979361 +0000 UTC m=+0.042707512 container create 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:41:38 np0005548915 systemd[1]: Started libpod-conmon-65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f.scope.
Dec  6 04:41:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v12: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec  6 04:41:39 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6658d303c14ec7d0de36ee661266a09745ad324cdc6232a6935fddf50716fc6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6658d303c14ec7d0de36ee661266a09745ad324cdc6232a6935fddf50716fc6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:39 np0005548915 podman[92567]: 2025-12-06 09:41:38.918927623 +0000 UTC m=+0.026655774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:39 np0005548915 podman[92567]: 2025-12-06 09:41:39.027213196 +0000 UTC m=+0.134941427 container init 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:39 np0005548915 podman[92567]: 2025-12-06 09:41:39.034565028 +0000 UTC m=+0.142293219 container start 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 04:41:39 np0005548915 podman[92567]: 2025-12-06 09:41:39.038209233 +0000 UTC m=+0.145937464 container attach 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 04:41:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/917045225' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 04:41:39 np0005548915 nostalgic_sutherland[92583]: 
Dec  6 04:41:39 np0005548915 nostalgic_sutherland[92583]: {"epoch":3,"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","modified":"2025-12-06T09:40:20.714037Z","created":"2025-12-06T09:37:38.663870Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec  6 04:41:39 np0005548915 nostalgic_sutherland[92583]: dumped monmap epoch 3
Dec  6 04:41:39 np0005548915 systemd[1]: libpod-65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f.scope: Deactivated successfully.
Dec  6 04:41:39 np0005548915 podman[92608]: 2025-12-06 09:41:39.58036144 +0000 UTC m=+0.020914132 container died 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 04:41:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a6658d303c14ec7d0de36ee661266a09745ad324cdc6232a6935fddf50716fc6-merged.mount: Deactivated successfully.
Dec  6 04:41:39 np0005548915 podman[92608]: 2025-12-06 09:41:39.617292628 +0000 UTC m=+0.057845340 container remove 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:39 np0005548915 systemd[1]: libpod-conmon-65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f.scope: Deactivated successfully.
Dec  6 04:41:39 np0005548915 ceph-mon[74327]: Deploying daemon node-exporter.compute-1 on compute-1
Dec  6 04:41:40 np0005548915 python3[92648]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:40 np0005548915 podman[92649]: 2025-12-06 09:41:40.362831833 +0000 UTC m=+0.071762769 container create d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:40 np0005548915 systemd[1]: Started libpod-conmon-d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c.scope.
Dec  6 04:41:40 np0005548915 podman[92649]: 2025-12-06 09:41:40.332757193 +0000 UTC m=+0.041688199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8487a8ee3ad9b2c6e0b42f4f150ada8c0db349fb0beeb5725c65f96e42ce4a99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8487a8ee3ad9b2c6e0b42f4f150ada8c0db349fb0beeb5725c65f96e42ce4a99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:40 np0005548915 podman[92649]: 2025-12-06 09:41:40.46366624 +0000 UTC m=+0.172597246 container init d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 04:41:40 np0005548915 podman[92649]: 2025-12-06 09:41:40.472513819 +0000 UTC m=+0.181444755 container start d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:40 np0005548915 podman[92649]: 2025-12-06 09:41:40.47789938 +0000 UTC m=+0.186830326 container attach d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 04:41:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec  6 04:41:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032166629' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  6 04:41:40 np0005548915 angry_keller[92664]: [client.openstack]
Dec  6 04:41:40 np0005548915 angry_keller[92664]: 	key = AQA7+TNpAAAAABAABZDZy1tS5Qay3mTps8dAWg==
Dec  6 04:41:40 np0005548915 angry_keller[92664]: 	caps mgr = "allow *"
Dec  6 04:41:40 np0005548915 angry_keller[92664]: 	caps mon = "profile rbd"
Dec  6 04:41:40 np0005548915 angry_keller[92664]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  6 04:41:40 np0005548915 systemd[1]: libpod-d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c.scope: Deactivated successfully.
Dec  6 04:41:40 np0005548915 podman[92649]: 2025-12-06 09:41:40.943809197 +0000 UTC m=+0.652740103 container died d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8487a8ee3ad9b2c6e0b42f4f150ada8c0db349fb0beeb5725c65f96e42ce4a99-merged.mount: Deactivated successfully.
Dec  6 04:41:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v13: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec  6 04:41:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:41:40 np0005548915 podman[92649]: 2025-12-06 09:41:40.990704039 +0000 UTC m=+0.699634945 container remove d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 04:41:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:41:41 np0005548915 systemd[1]: libpod-conmon-d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c.scope: Deactivated successfully.
Dec  6 04:41:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  6 04:41:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:41 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec  6 04:41:41 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec  6 04:41:41 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1032166629' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  6 04:41:41 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:41 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:41 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:42 np0005548915 ansible-async_wrapper.py[92850]: Invoked with j901279167954 30 /home/zuul/.ansible/tmp/ansible-tmp-1765014102.0664124-37446-62494592332673/AnsiballZ_command.py _
Dec  6 04:41:42 np0005548915 ansible-async_wrapper.py[92854]: Starting module and watcher
Dec  6 04:41:42 np0005548915 ansible-async_wrapper.py[92854]: Start watching 92855 (30)
Dec  6 04:41:42 np0005548915 ansible-async_wrapper.py[92855]: Start module (92855)
Dec  6 04:41:42 np0005548915 ansible-async_wrapper.py[92850]: Return async_wrapper task started.
Dec  6 04:41:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v14: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: Deploying daemon node-exporter.compute-2 on compute-2
Dec  6 04:41:43 np0005548915 python3[92856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:43 np0005548915 podman[92857]: 2025-12-06 09:41:43.144867278 +0000 UTC m=+0.059905554 container create f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:43 np0005548915 systemd[1]: Started libpod-conmon-f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e.scope.
Dec  6 04:41:43 np0005548915 podman[92857]: 2025-12-06 09:41:43.119986942 +0000 UTC m=+0.035025258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:43 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b316994d80ec05135bdf36b2151c6640fca4a4e58290e000e3f30d526b330d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b316994d80ec05135bdf36b2151c6640fca4a4e58290e000e3f30d526b330d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:43 np0005548915 podman[92857]: 2025-12-06 09:41:43.239751098 +0000 UTC m=+0.154789384 container init f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:43 np0005548915 podman[92857]: 2025-12-06 09:41:43.247255755 +0000 UTC m=+0.162294021 container start f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:41:43 np0005548915 podman[92857]: 2025-12-06 09:41:43.251412156 +0000 UTC m=+0.166450422 container attach f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:43 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  6 04:41:43 np0005548915 great_mendeleev[92872]: 
Dec  6 04:41:43 np0005548915 great_mendeleev[92872]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  6 04:41:43 np0005548915 podman[92857]: 2025-12-06 09:41:43.635757325 +0000 UTC m=+0.550795621 container died f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:41:43 np0005548915 systemd[1]: libpod-f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e.scope: Deactivated successfully.
Dec  6 04:41:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b5b316994d80ec05135bdf36b2151c6640fca4a4e58290e000e3f30d526b330d-merged.mount: Deactivated successfully.
Dec  6 04:41:43 np0005548915 podman[92857]: 2025-12-06 09:41:43.682027718 +0000 UTC m=+0.597066024 container remove f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:41:43 np0005548915 systemd[1]: libpod-conmon-f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e.scope: Deactivated successfully.
Dec  6 04:41:43 np0005548915 ansible-async_wrapper.py[92855]: Module complete (92855)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:43 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 718093b7-ae24-4ca4-868b-ad896e0c544f (Updating node-exporter deployment (+3 -> 3))
Dec  6 04:41:43 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 718093b7-ae24-4ca4-868b-ad896e0c544f (Updating node-exporter deployment (+3 -> 3)) in 8 seconds
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:43 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 11 completed events
Dec  6 04:41:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:41:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:41:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:44 np0005548915 python3[93007]: ansible-ansible.legacy.async_status Invoked with jid=j901279167954.92850 mode=status _async_dir=/root/.ansible_async
Dec  6 04:41:44 np0005548915 podman[93095]: 2025-12-06 09:41:44.433407437 +0000 UTC m=+0.046433098 container create 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 04:41:44 np0005548915 systemd[1]: Started libpod-conmon-46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f.scope.
Dec  6 04:41:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:44 np0005548915 python3[93087]: ansible-ansible.legacy.async_status Invoked with jid=j901279167954.92850 mode=cleanup _async_dir=/root/.ansible_async
Dec  6 04:41:44 np0005548915 podman[93095]: 2025-12-06 09:41:44.411412672 +0000 UTC m=+0.024438333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:44 np0005548915 podman[93095]: 2025-12-06 09:41:44.507245472 +0000 UTC m=+0.120271143 container init 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:44 np0005548915 podman[93095]: 2025-12-06 09:41:44.517032471 +0000 UTC m=+0.130058132 container start 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:44 np0005548915 podman[93095]: 2025-12-06 09:41:44.520960145 +0000 UTC m=+0.133985796 container attach 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:41:44 np0005548915 eloquent_sutherland[93111]: 167 167
Dec  6 04:41:44 np0005548915 podman[93095]: 2025-12-06 09:41:44.523114553 +0000 UTC m=+0.136140224 container died 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 04:41:44 np0005548915 systemd[1]: libpod-46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f.scope: Deactivated successfully.
Dec  6 04:41:44 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c8269a7b2b5d7079e5c988dc19efeed900be050d8eb8d864b089ca104e866929-merged.mount: Deactivated successfully.
Dec  6 04:41:44 np0005548915 podman[93095]: 2025-12-06 09:41:44.566605168 +0000 UTC m=+0.179630839 container remove 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:44 np0005548915 systemd[1]: libpod-conmon-46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f.scope: Deactivated successfully.
Dec  6 04:41:44 np0005548915 podman[93136]: 2025-12-06 09:41:44.716045301 +0000 UTC m=+0.041143381 container create a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 04:41:44 np0005548915 systemd[1]: Started libpod-conmon-a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e.scope.
Dec  6 04:41:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:44 np0005548915 podman[93136]: 2025-12-06 09:41:44.697131053 +0000 UTC m=+0.022229173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:44 np0005548915 podman[93136]: 2025-12-06 09:41:44.807911575 +0000 UTC m=+0.133009655 container init a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:41:44 np0005548915 podman[93136]: 2025-12-06 09:41:44.814204924 +0000 UTC m=+0.139303014 container start a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:41:44 np0005548915 podman[93136]: 2025-12-06 09:41:44.819347327 +0000 UTC m=+0.144445417 container attach a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 04:41:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v15: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:45 np0005548915 python3[93183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:45 np0005548915 crazy_lehmann[93153]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:41:45 np0005548915 crazy_lehmann[93153]: --> All data devices are unavailable
Dec  6 04:41:45 np0005548915 systemd[1]: libpod-a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e.scope: Deactivated successfully.
Dec  6 04:41:45 np0005548915 podman[93136]: 2025-12-06 09:41:45.24781908 +0000 UTC m=+0.572917200 container died a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 04:41:45 np0005548915 podman[93193]: 2025-12-06 09:41:45.266344936 +0000 UTC m=+0.079252187 container create 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:41:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:45 np0005548915 systemd[1]: Started libpod-conmon-96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60.scope.
Dec  6 04:41:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83-merged.mount: Deactivated successfully.
Dec  6 04:41:45 np0005548915 podman[93136]: 2025-12-06 09:41:45.314512888 +0000 UTC m=+0.639611008 container remove a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:41:45 np0005548915 podman[93193]: 2025-12-06 09:41:45.234636663 +0000 UTC m=+0.047543974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:45 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:45 np0005548915 systemd[1]: libpod-conmon-a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e.scope: Deactivated successfully.
Dec  6 04:41:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9ca6e139fa24d20d0b3da5cba9ecf5d2acf78da76ce88dc44d02f4bab32941/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9ca6e139fa24d20d0b3da5cba9ecf5d2acf78da76ce88dc44d02f4bab32941/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:45 np0005548915 podman[93193]: 2025-12-06 09:41:45.353999236 +0000 UTC m=+0.166906467 container init 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:45 np0005548915 podman[93193]: 2025-12-06 09:41:45.362880137 +0000 UTC m=+0.175787348 container start 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:45 np0005548915 podman[93193]: 2025-12-06 09:41:45.36616793 +0000 UTC m=+0.179075161 container attach 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:45 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  6 04:41:45 np0005548915 strange_driscoll[93221]: 
Dec  6 04:41:45 np0005548915 strange_driscoll[93221]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  6 04:41:45 np0005548915 podman[93193]: 2025-12-06 09:41:45.747905407 +0000 UTC m=+0.560812638 container died 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:45 np0005548915 systemd[1]: libpod-96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60.scope: Deactivated successfully.
Dec  6 04:41:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6d9ca6e139fa24d20d0b3da5cba9ecf5d2acf78da76ce88dc44d02f4bab32941-merged.mount: Deactivated successfully.
Dec  6 04:41:45 np0005548915 podman[93193]: 2025-12-06 09:41:45.830541939 +0000 UTC m=+0.643449200 container remove 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 04:41:45 np0005548915 systemd[1]: libpod-conmon-96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60.scope: Deactivated successfully.
Dec  6 04:41:45 np0005548915 podman[93349]: 2025-12-06 09:41:45.999552601 +0000 UTC m=+0.063711194 container create c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:46 np0005548915 systemd[1]: Started libpod-conmon-c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603.scope.
Dec  6 04:41:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:46 np0005548915 podman[93349]: 2025-12-06 09:41:45.978220197 +0000 UTC m=+0.042378830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:46 np0005548915 podman[93349]: 2025-12-06 09:41:46.082649918 +0000 UTC m=+0.146808601 container init c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:46 np0005548915 podman[93349]: 2025-12-06 09:41:46.091716014 +0000 UTC m=+0.155874637 container start c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 04:41:46 np0005548915 podman[93349]: 2025-12-06 09:41:46.095448433 +0000 UTC m=+0.159607056 container attach c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:46 np0005548915 suspicious_elgamal[93365]: 167 167
Dec  6 04:41:46 np0005548915 systemd[1]: libpod-c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603.scope: Deactivated successfully.
Dec  6 04:41:46 np0005548915 podman[93349]: 2025-12-06 09:41:46.099761848 +0000 UTC m=+0.163920471 container died c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-12c5ede26755ddfacf199d4f4e403d6f4b35837d17982fccba6e5d0c37d239d1-merged.mount: Deactivated successfully.
Dec  6 04:41:46 np0005548915 podman[93349]: 2025-12-06 09:41:46.146644871 +0000 UTC m=+0.210803474 container remove c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:46 np0005548915 systemd[1]: libpod-conmon-c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603.scope: Deactivated successfully.
Dec  6 04:41:46 np0005548915 podman[93388]: 2025-12-06 09:41:46.358262669 +0000 UTC m=+0.050404884 container create 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:46 np0005548915 systemd[1]: Started libpod-conmon-25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985.scope.
Dec  6 04:41:46 np0005548915 podman[93388]: 2025-12-06 09:41:46.333929591 +0000 UTC m=+0.026071806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:46 np0005548915 podman[93388]: 2025-12-06 09:41:46.462690741 +0000 UTC m=+0.154832986 container init 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  6 04:41:46 np0005548915 podman[93388]: 2025-12-06 09:41:46.472262413 +0000 UTC m=+0.164404618 container start 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 04:41:46 np0005548915 podman[93388]: 2025-12-06 09:41:46.475758633 +0000 UTC m=+0.167900848 container attach 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]: {
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:    "1": [
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:        {
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "devices": [
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "/dev/loop3"
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            ],
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "lv_name": "ceph_lv0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "lv_size": "21470642176",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "name": "ceph_lv0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "tags": {
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.cluster_name": "ceph",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.crush_device_class": "",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.encrypted": "0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.osd_id": "1",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.type": "block",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.vdo": "0",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:                "ceph.with_tpm": "0"
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            },
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "type": "block",
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:            "vg_name": "ceph_vg0"
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:        }
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]:    ]
Dec  6 04:41:46 np0005548915 vigilant_ptolemy[93404]: }
Dec  6 04:41:46 np0005548915 systemd[1]: libpod-25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985.scope: Deactivated successfully.
Dec  6 04:41:46 np0005548915 podman[93388]: 2025-12-06 09:41:46.755765804 +0000 UTC m=+0.447908089 container died 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:46 np0005548915 python3[93434]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204-merged.mount: Deactivated successfully.
Dec  6 04:41:46 np0005548915 podman[93388]: 2025-12-06 09:41:46.822186843 +0000 UTC m=+0.514329038 container remove 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:46 np0005548915 systemd[1]: libpod-conmon-25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985.scope: Deactivated successfully.
Dec  6 04:41:46 np0005548915 podman[93449]: 2025-12-06 09:41:46.912465477 +0000 UTC m=+0.112925220 container create a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 04:41:46 np0005548915 podman[93449]: 2025-12-06 09:41:46.853938217 +0000 UTC m=+0.054397980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:46 np0005548915 systemd[1]: Started libpod-conmon-a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2.scope.
Dec  6 04:41:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v16: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b394a236b692e5c816f5affce081b32c58f35c50aec856ee4a0981c0fe63e74/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b394a236b692e5c816f5affce081b32c58f35c50aec856ee4a0981c0fe63e74/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:47 np0005548915 podman[93449]: 2025-12-06 09:41:47.007952765 +0000 UTC m=+0.208412508 container init a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:47 np0005548915 podman[93449]: 2025-12-06 09:41:47.017927181 +0000 UTC m=+0.218386944 container start a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:47 np0005548915 podman[93449]: 2025-12-06 09:41:47.021944057 +0000 UTC m=+0.222403820 container attach a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 04:41:47 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  6 04:41:47 np0005548915 great_bohr[93487]: 
Dec  6 04:41:47 np0005548915 great_bohr[93487]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": 
"osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  6 04:41:47 np0005548915 systemd[1]: libpod-a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2.scope: Deactivated successfully.
Dec  6 04:41:47 np0005548915 podman[93449]: 2025-12-06 09:41:47.493265576 +0000 UTC m=+0.693725359 container died a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9b394a236b692e5c816f5affce081b32c58f35c50aec856ee4a0981c0fe63e74-merged.mount: Deactivated successfully.
Dec  6 04:41:47 np0005548915 podman[93449]: 2025-12-06 09:41:47.549681449 +0000 UTC m=+0.750141232 container remove a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  6 04:41:47 np0005548915 systemd[1]: libpod-conmon-a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2.scope: Deactivated successfully.
Dec  6 04:41:47 np0005548915 podman[93580]: 2025-12-06 09:41:47.579875542 +0000 UTC m=+0.061093251 container create 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:47 np0005548915 systemd[1]: Started libpod-conmon-97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3.scope.
Dec  6 04:41:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:47 np0005548915 podman[93580]: 2025-12-06 09:41:47.5604654 +0000 UTC m=+0.041683089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:47 np0005548915 podman[93580]: 2025-12-06 09:41:47.65918513 +0000 UTC m=+0.140402829 container init 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 04:41:47 np0005548915 podman[93580]: 2025-12-06 09:41:47.66900683 +0000 UTC m=+0.150224509 container start 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:47 np0005548915 quirky_shtern[93609]: 167 167
Dec  6 04:41:47 np0005548915 podman[93580]: 2025-12-06 09:41:47.672942635 +0000 UTC m=+0.154160334 container attach 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:47 np0005548915 systemd[1]: libpod-97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3.scope: Deactivated successfully.
Dec  6 04:41:47 np0005548915 podman[93580]: 2025-12-06 09:41:47.67437783 +0000 UTC m=+0.155595499 container died 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:41:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c754b53438597553d2d807def5819f252153e98b20fad63ebed6602a0d6ebabd-merged.mount: Deactivated successfully.
Dec  6 04:41:47 np0005548915 podman[93580]: 2025-12-06 09:41:47.719506176 +0000 UTC m=+0.200723885 container remove 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:41:47 np0005548915 systemd[1]: libpod-conmon-97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3.scope: Deactivated successfully.
Dec  6 04:41:47 np0005548915 podman[93633]: 2025-12-06 09:41:47.867745852 +0000 UTC m=+0.044486808 container create eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 04:41:47 np0005548915 ansible-async_wrapper.py[92854]: Done in kid B.
Dec  6 04:41:47 np0005548915 systemd[1]: Started libpod-conmon-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope.
Dec  6 04:41:47 np0005548915 podman[93633]: 2025-12-06 09:41:47.847728089 +0000 UTC m=+0.024469095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:47 np0005548915 podman[93633]: 2025-12-06 09:41:47.967919068 +0000 UTC m=+0.144660084 container init eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:47 np0005548915 podman[93633]: 2025-12-06 09:41:47.978864134 +0000 UTC m=+0.155605080 container start eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  6 04:41:47 np0005548915 podman[93633]: 2025-12-06 09:41:47.983860922 +0000 UTC m=+0.160601968 container attach eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 04:41:48 np0005548915 python3[93703]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:48 np0005548915 podman[93748]: 2025-12-06 09:41:48.643761021 +0000 UTC m=+0.053025067 container create 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 04:41:48 np0005548915 lvm[93763]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:41:48 np0005548915 lvm[93763]: VG ceph_vg0 finished
Dec  6 04:41:48 np0005548915 busy_colden[93649]: {}
Dec  6 04:41:48 np0005548915 systemd[1]: Started libpod-conmon-1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5.scope.
Dec  6 04:41:48 np0005548915 systemd[1]: libpod-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope: Deactivated successfully.
Dec  6 04:41:48 np0005548915 podman[93748]: 2025-12-06 09:41:48.62192041 +0000 UTC m=+0.031184506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:48 np0005548915 conmon[93649]: conmon eaa07fe3830443278573 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope/container/memory.events
Dec  6 04:41:48 np0005548915 systemd[1]: libpod-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope: Consumed 1.179s CPU time.
Dec  6 04:41:48 np0005548915 podman[93633]: 2025-12-06 09:41:48.717675428 +0000 UTC m=+0.894416404 container died eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:41:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389c61b318ffe409d59c8cb9ab1140c3d98e4ae3ec830093742d971cb02a636f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389c61b318ffe409d59c8cb9ab1140c3d98e4ae3ec830093742d971cb02a636f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940-merged.mount: Deactivated successfully.
Dec  6 04:41:48 np0005548915 podman[93748]: 2025-12-06 09:41:48.754542013 +0000 UTC m=+0.163806059 container init 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:41:48 np0005548915 podman[93748]: 2025-12-06 09:41:48.7601454 +0000 UTC m=+0.169409446 container start 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Dec  6 04:41:48 np0005548915 podman[93748]: 2025-12-06 09:41:48.763904678 +0000 UTC m=+0.173168724 container attach 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:48 np0005548915 podman[93633]: 2025-12-06 09:41:48.769790464 +0000 UTC m=+0.946531420 container remove eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:41:48 np0005548915 systemd[1]: libpod-conmon-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope: Deactivated successfully.
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:48 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev b1a841d8-e71a-43d3-ad28-5a44e75485bf (Updating rgw.rgw deployment (+3 -> 3))
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  6 04:41:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v17: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:48 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.qizhkr on compute-2
Dec  6 04:41:48 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.qizhkr on compute-2
Dec  6 04:41:49 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  6 04:41:49 np0005548915 hopeful_pascal[93769]: 
Dec  6 04:41:49 np0005548915 hopeful_pascal[93769]: [{"container_id": "aa22500c4f14", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.12%", "created": "2025-12-06T09:38:26.308407Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.897361Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-12-06T09:38:26.201101Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-0", "version": "19.2.3"}, {"container_id": "500f8c89b5c2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.41%", "created": "2025-12-06T09:39:19.960123Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961429Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-12-06T09:39:19.844753Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-1", "version": "19.2.3"}, {"container_id": "29aae73f62af", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.38%", "created": "2025-12-06T09:40:42.007499Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631523Z", "memory_usage": 7808745, "ports": [], "service_name": "crash", "started": "2025-12-06T09:40:41.208240Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-2", "version": "19.2.3"}, {"container_id": "815d2c9c324f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "24.20%", "created": "2025-12-06T09:37:46.726645Z", "daemon_id": "compute-0.qhdjwa", "daemon_name": "mgr.compute-0.qhdjwa", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.897149Z", "memory_usage": 543686656, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-06T09:37:46.196694Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mgr.compute-0.qhdjwa", "version": "19.2.3"}, {"container_id": "66d946b34f90", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": 
"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "49.99%", "created": "2025-12-06T09:40:29.037886Z", "daemon_id": "compute-1.sauzid", "daemon_name": "mgr.compute-1.sauzid", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961804Z", "memory_usage": 496186163, "ports": [8765], "service_name": "mgr", "started": "2025-12-06T09:40:28.901022Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mgr.compute-1.sauzid", "version": "19.2.3"}, {"container_id": "4821735c9154", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "35.81%", "created": "2025-12-06T09:40:21.865862Z", "daemon_id": "compute-2.oazbvn", "daemon_name": "mgr.compute-2.oazbvn", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631430Z", "memory_usage": 504574771, "ports": [8765], "service_name": "mgr", "started": "2025-12-06T09:40:21.774842Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mgr.compute-2.oazbvn", "version": "19.2.3"}, {"container_id": "484d6ed1039c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.48%", "created": "2025-12-06T09:37:41.200513Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": 
"mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.896943Z", "memory_request": 2147483648, "memory_usage": 61886955, "ports": [], "service_name": "mon", "started": "2025-12-06T09:37:43.583790Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0", "version": "19.2.3"}, {"container_id": "d320de814b27", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.40%", "created": "2025-12-06T09:40:12.577566Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961697Z", "memory_request": 2147483648, "memory_usage": 45529169, "ports": [], "service_name": "mon", "started": "2025-12-06T09:40:12.471377Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-1", "version": "19.2.3"}, {"container_id": "9800312b2542", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.78%", "created": "2025-12-06T09:40:09.874483Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631326Z", "memory_request": 2147483648, "memory_usage": 49398415, "ports": [], "service_name": "mon", "started": "2025-12-06T09:40:08.953427Z", "status": 1, 
"status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-2", "version": "19.2.3"}, {"daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "events": ["2025-12-06T09:41:38.394457Z daemon:node-exporter.compu
Dec  6 04:41:49 np0005548915 hopeful_pascal[93769]: "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2025-12-06T09:41:41.015939Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-12-06T09:41:43.837357Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "1aa09529261e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.69%", "created": "2025-12-06T09:39:35.584331Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.897606Z", "memory_request": 4294967296, "memory_usage": 68933386, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T09:39:34.881862Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@osd.1", "version": "19.2.3"}, {"container_id": "0f0393491dd0", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.17%", "created": "2025-12-06T09:39:32.740564Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961579Z", "memory_request": 5502921113, "memory_usage": 70925680, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T09:39:32.604587Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@osd.0", "version": "19.2.3"}, {"container_id": "446ec9caaae7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "3.28%", "created": "2025-12-06T09:40:59.214000Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631597Z", "memory_request": 4294967296, "memory_usage": 64529367, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T09:40:59.105219Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@osd.2", "version": "19.2.3"}]
Dec  6 04:41:49 np0005548915 systemd[1]: libpod-1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5.scope: Deactivated successfully.
Dec  6 04:41:49 np0005548915 podman[93748]: 2025-12-06 09:41:49.248616219 +0000 UTC m=+0.657880265 container died 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:41:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-389c61b318ffe409d59c8cb9ab1140c3d98e4ae3ec830093742d971cb02a636f-merged.mount: Deactivated successfully.
Dec  6 04:41:49 np0005548915 podman[93748]: 2025-12-06 09:41:49.301560493 +0000 UTC m=+0.710824549 container remove 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:49 np0005548915 systemd[1]: libpod-conmon-1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5.scope: Deactivated successfully.
Dec  6 04:41:49 np0005548915 rsyslogd[1004]: message too long (8192) with configured size 8096, begin of message is: [{"container_id": "aa22500c4f14", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  6 04:41:49 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:49 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:49 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:41:49 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:41:49 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:49 np0005548915 ceph-mon[74327]: Deploying daemon rgw.rgw.compute-2.qizhkr on compute-2
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:50 np0005548915 python3[93842]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:50 np0005548915 podman[93843]: 2025-12-06 09:41:50.438708566 +0000 UTC m=+0.068360882 container create c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:41:50 np0005548915 podman[93843]: 2025-12-06 09:41:50.406665283 +0000 UTC m=+0.036317669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:50 np0005548915 systemd[1]: Started libpod-conmon-c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462.scope.
Dec  6 04:41:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6da106f69b92129fdeb0d774d400de61dbe4c21747582cf5b5048cc0d80e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6da106f69b92129fdeb0d774d400de61dbe4c21747582cf5b5048cc0d80e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:50 np0005548915 podman[93843]: 2025-12-06 09:41:50.559896516 +0000 UTC m=+0.189548912 container init c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 04:41:50 np0005548915 podman[93843]: 2025-12-06 09:41:50.567956661 +0000 UTC m=+0.197609007 container start c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:50 np0005548915 podman[93843]: 2025-12-06 09:41:50.572124293 +0000 UTC m=+0.201776629 container attach c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:50 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.oqhsdh on compute-1
Dec  6 04:41:50 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.oqhsdh on compute-1
Dec  6 04:41:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v18: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506900584' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  6 04:41:51 np0005548915 sharp_beaver[93858]: 
Dec  6 04:41:51 np0005548915 sharp_beaver[93858]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":85,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1765014071,"num_in_osds":3,"osd_in_since":1765014049,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84246528,"bytes_avail":64327680000,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-06T09:41:29:967825+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-12-06T09:40:50.551863+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.sauzid":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.oazbvn":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"b1a841d8-e71a-43d3-ad28-5a44e75485bf":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  6 04:41:51 np0005548915 systemd[1]: libpod-c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462.scope: Deactivated successfully.
Dec  6 04:41:51 np0005548915 podman[93843]: 2025-12-06 09:41:51.053463897 +0000 UTC m=+0.683116243 container died c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f2b6da106f69b92129fdeb0d774d400de61dbe4c21747582cf5b5048cc0d80e6-merged.mount: Deactivated successfully.
Dec  6 04:41:51 np0005548915 podman[93843]: 2025-12-06 09:41:51.112566516 +0000 UTC m=+0.742218862 container remove c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 04:41:51 np0005548915 systemd[1]: libpod-conmon-c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462.scope: Deactivated successfully.
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:51 np0005548915 ceph-mon[74327]: Deploying daemon rgw.rgw.compute-1.oqhsdh on compute-1
Dec  6 04:41:52 np0005548915 python3[93922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:52 np0005548915 podman[93923]: 2025-12-06 09:41:52.179165359 +0000 UTC m=+0.063183318 container create 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec  6 04:41:52 np0005548915 systemd[1]: Started libpod-conmon-0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f.scope.
Dec  6 04:41:52 np0005548915 podman[93923]: 2025-12-06 09:41:52.149187562 +0000 UTC m=+0.033205591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:52 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f74762e91d63a927082935ea18b2985efbf631256528391147f248ed5108898/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f74762e91d63a927082935ea18b2985efbf631256528391147f248ed5108898/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:52 np0005548915 podman[93923]: 2025-12-06 09:41:52.289448615 +0000 UTC m=+0.173466614 container init 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  6 04:41:52 np0005548915 podman[93923]: 2025-12-06 09:41:52.297759148 +0000 UTC m=+0.181777107 container start 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 04:41:52 np0005548915 podman[93923]: 2025-12-06 09:41:52.301369592 +0000 UTC m=+0.185387561 container attach 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:52 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.zktslo on compute-0
Dec  6 04:41:52 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.zktslo on compute-0
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220877648' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  6 04:41:52 np0005548915 nice_johnson[93938]: 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.102:0/3027759423' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:52 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  6 04:41:52 np0005548915 systemd[1]: libpod-0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f.scope: Deactivated successfully.
Dec  6 04:41:52 np0005548915 nice_johnson[93938]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.qhdjwa/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.sauzid/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.oazbvn/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502921113","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.zktslo","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.oqhsdh","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.qizhkr","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  6 04:41:52 np0005548915 podman[93923]: 2025-12-06 09:41:52.728269895 +0000 UTC m=+0.612287874 container died 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:52 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6f74762e91d63a927082935ea18b2985efbf631256528391147f248ed5108898-merged.mount: Deactivated successfully.
Dec  6 04:41:52 np0005548915 podman[93923]: 2025-12-06 09:41:52.769148558 +0000 UTC m=+0.653166537 container remove 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec  6 04:41:52 np0005548915 systemd[1]: libpod-conmon-0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f.scope: Deactivated successfully.
Dec  6 04:41:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v21: 133 pgs: 1 unknown, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:53 np0005548915 podman[94068]: 2025-12-06 09:41:53.21562217 +0000 UTC m=+0.051304393 container create 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:53 np0005548915 systemd[1]: Started libpod-conmon-46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2.scope.
Dec  6 04:41:53 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:53 np0005548915 podman[94068]: 2025-12-06 09:41:53.198939022 +0000 UTC m=+0.034621275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:53 np0005548915 podman[94068]: 2025-12-06 09:41:53.306363279 +0000 UTC m=+0.142045522 container init 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:41:53 np0005548915 podman[94068]: 2025-12-06 09:41:53.318261454 +0000 UTC m=+0.153943717 container start 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:41:53 np0005548915 podman[94068]: 2025-12-06 09:41:53.322812969 +0000 UTC m=+0.158495222 container attach 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:41:53 np0005548915 eager_swartz[94085]: 167 167
Dec  6 04:41:53 np0005548915 systemd[1]: libpod-46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2.scope: Deactivated successfully.
Dec  6 04:41:53 np0005548915 podman[94068]: 2025-12-06 09:41:53.32698037 +0000 UTC m=+0.162662673 container died 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay-be0623897aff7406f3c0d3c7d8f301cfb511ba108babb159bcb53e2cef3c846b-merged.mount: Deactivated successfully.
Dec  6 04:41:53 np0005548915 podman[94068]: 2025-12-06 09:41:53.378969734 +0000 UTC m=+0.214651977 container remove 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:53 np0005548915 systemd[1]: libpod-conmon-46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2.scope: Deactivated successfully.
Dec  6 04:41:53 np0005548915 systemd[1]: Reloading.
Dec  6 04:41:53 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:41:53 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: Deploying daemon rgw.rgw.compute-0.zktslo on compute-0
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  6 04:41:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 45 pg[10.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  6 04:41:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 04:41:53 np0005548915 systemd[1]: Reloading.
Dec  6 04:41:53 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:41:53 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:41:53 np0005548915 python3[94167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:53 np0005548915 podman[94205]: 2025-12-06 09:41:53.936344232 +0000 UTC m=+0.044081105 container create dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec  6 04:41:54 np0005548915 podman[94205]: 2025-12-06 09:41:53.919160507 +0000 UTC m=+0.026897400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:54 np0005548915 systemd[1]: Started libpod-conmon-dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5.scope.
Dec  6 04:41:54 np0005548915 systemd[1]: Starting Ceph rgw.rgw.compute-0.zktslo for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:41:54 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39486e697181db7e753b1bcfd97b5d4957667be1f4386fcd828e84ba9f1d4042/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39486e697181db7e753b1bcfd97b5d4957667be1f4386fcd828e84ba9f1d4042/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:54 np0005548915 podman[94205]: 2025-12-06 09:41:54.096892716 +0000 UTC m=+0.204629609 container init dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:41:54 np0005548915 podman[94205]: 2025-12-06 09:41:54.107948655 +0000 UTC m=+0.215685538 container start dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:54 np0005548915 podman[94205]: 2025-12-06 09:41:54.111731985 +0000 UTC m=+0.219468878 container attach dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Dec  6 04:41:54 np0005548915 podman[94288]: 2025-12-06 09:41:54.34410557 +0000 UTC m=+0.060390770 container create cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 04:41:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.zktslo supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:54 np0005548915 podman[94288]: 2025-12-06 09:41:54.311408276 +0000 UTC m=+0.027693476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:54 np0005548915 podman[94288]: 2025-12-06 09:41:54.41718979 +0000 UTC m=+0.133474950 container init cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  6 04:41:54 np0005548915 podman[94288]: 2025-12-06 09:41:54.430035366 +0000 UTC m=+0.146320526 container start cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:41:54 np0005548915 bash[94288]: cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595
Dec  6 04:41:54 np0005548915 systemd[1]: Started Ceph rgw.rgw.compute-0.zktslo for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702722184' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  6 04:41:54 np0005548915 priceless_franklin[94222]: mimic
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:54 np0005548915 systemd[1]: libpod-dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5.scope: Deactivated successfully.
Dec  6 04:41:54 np0005548915 radosgw[94308]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  6 04:41:54 np0005548915 radosgw[94308]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec  6 04:41:54 np0005548915 radosgw[94308]: framework: beast
Dec  6 04:41:54 np0005548915 radosgw[94308]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  6 04:41:54 np0005548915 radosgw[94308]: init_numa not setting numa affinity
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev b1a841d8-e71a-43d3-ad28-5a44e75485bf (Updating rgw.rgw deployment (+3 -> 3))
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event b1a841d8-e71a-43d3-ad28-5a44e75485bf (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  6 04:41:54 np0005548915 podman[94330]: 2025-12-06 09:41:54.560319144 +0000 UTC m=+0.036012760 container died dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 0e6a1a60-47ae-48b4-a96e-88b7fa58d89d (Updating mds.cephfs deployment (+3 -> 3))
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 04:41:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-39486e697181db7e753b1bcfd97b5d4957667be1f4386fcd828e84ba9f1d4042-merged.mount: Deactivated successfully.
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.czucwy on compute-2
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.czucwy on compute-2
Dec  6 04:41:54 np0005548915 podman[94330]: 2025-12-06 09:41:54.603877521 +0000 UTC m=+0.079571137 container remove dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:41:54 np0005548915 systemd[1]: libpod-conmon-dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5.scope: Deactivated successfully.
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  6 04:41:54 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  6 04:41:54 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 46 pg[10.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:41:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v24: 134 pgs: 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:41:55 np0005548915 python3[94944]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:41:55 np0005548915 podman[94945]: 2025-12-06 09:41:55.694837984 +0000 UTC m=+0.051202339 container create 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:55 np0005548915 systemd[1]: Started libpod-conmon-7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5.scope.
Dec  6 04:41:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c35ce07fa37af7d2f4a0cc093a2ee278c7fcf237bead2739d303d3cdce3e15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c35ce07fa37af7d2f4a0cc093a2ee278c7fcf237bead2739d303d3cdce3e15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  6 04:41:55 np0005548915 podman[94945]: 2025-12-06 09:41:55.676049901 +0000 UTC m=+0.032414266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:41:55 np0005548915 podman[94945]: 2025-12-06 09:41:55.778514999 +0000 UTC m=+0.134879374 container init 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  6 04:41:55 np0005548915 podman[94945]: 2025-12-06 09:41:55.78739103 +0000 UTC m=+0.143755385 container start 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:55 np0005548915 podman[94945]: 2025-12-06 09:41:55.791410757 +0000 UTC m=+0.147775132 container attach 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: Deploying daemon mds.cephfs.compute-2.czucwy on compute-2
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  6 04:41:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:56 np0005548915 agitated_mirzakhani[94960]: 
Dec  6 04:41:56 np0005548915 agitated_mirzakhani[94960]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":9}}
Dec  6 04:41:56 np0005548915 systemd[1]: libpod-7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5.scope: Deactivated successfully.
Dec  6 04:41:56 np0005548915 podman[94945]: 2025-12-06 09:41:56.251892732 +0000 UTC m=+0.608257087 container died 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  6 04:41:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b5c35ce07fa37af7d2f4a0cc093a2ee278c7fcf237bead2739d303d3cdce3e15-merged.mount: Deactivated successfully.
Dec  6 04:41:56 np0005548915 podman[94945]: 2025-12-06 09:41:56.296010137 +0000 UTC m=+0.652374522 container remove 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:56 np0005548915 systemd[1]: libpod-conmon-7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5.scope: Deactivated successfully.
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ujokui on compute-0
Dec  6 04:41:56 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ujokui on compute-0
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: Deploying daemon mds.cephfs.compute-0.ujokui on compute-0
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 new map
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-12-06T09:41:56:804272+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:41:29.967778+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.czucwy{-1:24274} state up:standby seq 1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:boot
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] as mds.0
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.czucwy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"} v 0)
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"}]: dispatch
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 all = 0
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e4 new map
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-12-06T09:41:56:835698+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:41:56.835690+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24274}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.czucwy{0:24274} state up:creating seq 1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:creating}
Dec  6 04:41:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.czucwy is now active in filesystem cephfs as rank 0
Dec  6 04:41:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v27: 135 pgs: 1 unknown, 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec  6 04:41:57 np0005548915 podman[95091]: 2025-12-06 09:41:57.043710291 +0000 UTC m=+0.043034822 container create c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:57 np0005548915 systemd[1]: Started libpod-conmon-c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de.scope.
Dec  6 04:41:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:41:57 np0005548915 podman[95091]: 2025-12-06 09:41:57.026151345 +0000 UTC m=+0.025475916 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:57 np0005548915 podman[95091]: 2025-12-06 09:41:57.136317028 +0000 UTC m=+0.135641609 container init c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 04:41:57 np0005548915 podman[95091]: 2025-12-06 09:41:57.14809963 +0000 UTC m=+0.147424181 container start c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:41:57 np0005548915 podman[95091]: 2025-12-06 09:41:57.151630151 +0000 UTC m=+0.150954722 container attach c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 04:41:57 np0005548915 xenodochial_bartik[95109]: 167 167
Dec  6 04:41:57 np0005548915 systemd[1]: libpod-c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de.scope: Deactivated successfully.
Dec  6 04:41:57 np0005548915 podman[95091]: 2025-12-06 09:41:57.155566296 +0000 UTC m=+0.154890867 container died c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-42e985aa3c60d1b258a21f70173d5e202c9e7822cd17a389b3f18401030939c9-merged.mount: Deactivated successfully.
Dec  6 04:41:57 np0005548915 podman[95091]: 2025-12-06 09:41:57.204262885 +0000 UTC m=+0.203587426 container remove c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 04:41:57 np0005548915 systemd[1]: libpod-conmon-c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de.scope: Deactivated successfully.
Dec  6 04:41:57 np0005548915 systemd[1]: Reloading.
Dec  6 04:41:57 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:41:57 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:41:57 np0005548915 systemd[1]: Reloading.
Dec  6 04:41:57 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:41:57 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: daemon mds.cephfs.compute-2.czucwy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  6 04:41:57 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: Cluster is now healthy
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: daemon mds.cephfs.compute-2.czucwy is now active in filesystem cephfs as rank 0
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e5 new map
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2025-12-06T09:41:57:856282+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:41:57.856277+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24274}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24274 members: 24274#012[mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 2 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:active
Dec  6 04:41:57 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active}
Dec  6 04:41:57 np0005548915 systemd[1]: Starting Ceph mds.cephfs.compute-0.ujokui for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:41:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 49 pg[12.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:41:58 np0005548915 podman[95252]: 2025-12-06 09:41:58.286215024 +0000 UTC m=+0.057058174 container create 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:41:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ujokui supports timestamps until 2038 (0x7fffffff)
Dec  6 04:41:58 np0005548915 podman[95252]: 2025-12-06 09:41:58.263140355 +0000 UTC m=+0.033983505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:41:58 np0005548915 podman[95252]: 2025-12-06 09:41:58.36140247 +0000 UTC m=+0.132245621 container init 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:41:58 np0005548915 podman[95252]: 2025-12-06 09:41:58.371391727 +0000 UTC m=+0.142234877 container start 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:41:58 np0005548915 bash[95252]: 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717
Dec  6 04:41:58 np0005548915 systemd[1]: Started Ceph mds.cephfs.compute-0.ujokui for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:41:58 np0005548915 ceph-mds[95272]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 04:41:58 np0005548915 ceph-mds[95272]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec  6 04:41:58 np0005548915 ceph-mds[95272]: main not setting numa affinity
Dec  6 04:41:58 np0005548915 ceph-mds[95272]: pidfile_write: ignore empty --pid-file
Dec  6 04:41:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui[95268]: starting mds.cephfs.compute-0.ujokui at 
Dec  6 04:41:58 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Updating MDS map to version 5 from mon.0
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.fpvjgb on compute-1
Dec  6 04:41:58 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.fpvjgb on compute-1
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 50 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: Deploying daemon mds.cephfs.compute-1.fpvjgb on compute-1
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e6 new map
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2025-12-06T09:41:58:872230+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:41:57.856277+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24274}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24274 members: 24274#012[mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 2 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]
Dec  6 04:41:58 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Updating MDS map to version 6 from mon.0
Dec  6 04:41:58 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Monitors have assigned me to become a standby
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] up:boot
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 1 up:standby
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"} v 0)
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"}]: dispatch
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e6 all = 0
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e7 new map
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2025-12-06T09:41:58:889029+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:41:57.856277+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24274}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24274 members: 24274#012[mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 2 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]
Dec  6 04:41:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 1 up:standby
Dec  6 04:41:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v30: 136 pgs: 2 unknown, 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:41:59 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 12 completed events
Dec  6 04:41:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:41:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:41:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:41:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:41:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 04:41:59 np0005548915 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 04:42:00 np0005548915 radosgw[94308]: v1 topic migration: starting v1 topic migration..
Dec  6 04:42:00 np0005548915 radosgw[94308]: LDAP not started since no server URIs were provided in the configuration.
Dec  6 04:42:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo[94304]: 2025-12-06T09:42:00.101+0000 7f551790e980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  6 04:42:00 np0005548915 radosgw[94308]: v1 topic migration: finished v1 topic migration
Dec  6 04:42:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec  6 04:42:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Dec  6 04:42:00 np0005548915 radosgw[94308]: framework: beast
Dec  6 04:42:00 np0005548915 radosgw[94308]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  6 04:42:00 np0005548915 radosgw[94308]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  6 04:42:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Dec  6 04:42:00 np0005548915 radosgw[94308]: starting handler: beast
Dec  6 04:42:00 np0005548915 radosgw[94308]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 04:42:00 np0005548915 radosgw[94308]: mgrc service_daemon_register rgw.14532 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.zktslo,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=d81f60a3-cfd4-40b3-a809-ad3aae1b1fd0,zone_name=default,zonegroup_id=75773215-ab74-4afd-a4c0-f777a01e4a1a,zonegroup_name=default}
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 0e6a1a60-47ae-48b4-a96e-88b7fa58d89d (Updating mds.cephfs deployment (+3 -> 3))
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 0e6a1a60-47ae-48b4-a96e-88b7fa58d89d (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 9a6db2cb-2f95-4ec5-a56e-3692847dbc20 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e8 new map
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2025-12-06T09:42:00:908587+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:42:00.880325+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24274}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24274 members: 24274#012[mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.fpvjgb{-1:24215} state up:standby seq 1 addr [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] compat {c=[1],r=[1],i=[1fff]}]
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] up:boot
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:active
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"} v 0)
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"}]: dispatch
Dec  6 04:42:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e8 all = 0
Dec  6 04:42:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v32: 136 pgs: 136 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu-rgw
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu-rgw
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.djsnbu's ganesha conf is defaulting to empty
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.djsnbu's ganesha conf is defaulting to empty
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.djsnbu on compute-1
Dec  6 04:42:01 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.djsnbu on compute-1
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:42:01 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu-rgw
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: Bind address in nfs.cephfs.0.0.compute-1.djsnbu's ganesha conf is defaulting to empty
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: Deploying daemon nfs.cephfs.0.0.compute-1.djsnbu on compute-1
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e9 new map
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2025-12-06T09:42:02:933823+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:42:00.880325+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24274}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24274 members: 24274#012[mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.fpvjgb{-1:24215} state up:standby seq 1 addr [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] compat {c=[1],r=[1],i=[1fff]}]
Dec  6 04:42:02 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Updating MDS map to version 9 from mon.0
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] up:standby
Dec  6 04:42:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec  6 04:42:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v33: 136 pgs: 136 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 5.2 KiB/s wr, 20 op/s
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:03 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb
Dec  6 04:42:03 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  6 04:42:03 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  6 04:42:03 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  6 04:42:03 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  6 04:42:04 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 13 completed events
Dec  6 04:42:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:42:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:04 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 4852788a-1dca-45d5-abe3-a4fe57183f8b (Global Recovery Event) in 10 seconds
Dec  6 04:42:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 8.0 KiB/s wr, 343 op/s
Dec  6 04:42:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 new map
Dec  6 04:42:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 print_map#012e10#012btime 2025-12-06T09:42:05:044345+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T09:41:29.967778+0000#012modified#0112025-12-06T09:42:00.880325+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24274}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24274 members: 24274#012[mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.fpvjgb{-1:24215} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] compat {c=[1],r=[1],i=[1fff]}]
Dec  6 04:42:05 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] up:standby
Dec  6 04:42:05 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec  6 04:42:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb-rgw
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb-rgw
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.sseuqb's ganesha conf is defaulting to empty
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.sseuqb's ganesha conf is defaulting to empty
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:42:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.sseuqb on compute-2
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.sseuqb on compute-2
Dec  6 04:42:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 7.0 KiB/s wr, 301 op/s
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb-rgw
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: Bind address in nfs.cephfs.1.0.compute-2.sseuqb's ganesha conf is defaulting to empty
Dec  6 04:42:07 np0005548915 ceph-mon[74327]: Deploying daemon nfs.cephfs.1.0.compute-2.sseuqb on compute-2
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck-rgw
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck-rgw
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.dfwxck's ganesha conf is defaulting to empty
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.dfwxck's ganesha conf is defaulting to empty
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:42:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0
Dec  6 04:42:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 5.7 KiB/s wr, 245 op/s
Dec  6 04:42:09 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 14 completed events
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:09 np0005548915 podman[95524]: 2025-12-06 09:42:09.169338658 +0000 UTC m=+0.047063863 container create 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:42:09 np0005548915 systemd[1]: Started libpod-conmon-009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d.scope.
Dec  6 04:42:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:09 np0005548915 podman[95524]: 2025-12-06 09:42:09.151578162 +0000 UTC m=+0.029303387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:42:09 np0005548915 podman[95524]: 2025-12-06 09:42:09.256970577 +0000 UTC m=+0.134695822 container init 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:42:09 np0005548915 podman[95524]: 2025-12-06 09:42:09.269659388 +0000 UTC m=+0.147384593 container start 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:42:09 np0005548915 podman[95524]: 2025-12-06 09:42:09.27385025 +0000 UTC m=+0.151575565 container attach 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 04:42:09 np0005548915 busy_payne[95541]: 167 167
Dec  6 04:42:09 np0005548915 systemd[1]: libpod-009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d.scope: Deactivated successfully.
Dec  6 04:42:09 np0005548915 podman[95524]: 2025-12-06 09:42:09.277089487 +0000 UTC m=+0.154814702 container died 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:42:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-80de7da242f5161fc7924c7f28ab500e85b39c14727e8901fe8247af94671aa9-merged.mount: Deactivated successfully.
Dec  6 04:42:09 np0005548915 podman[95524]: 2025-12-06 09:42:09.322115005 +0000 UTC m=+0.199840200 container remove 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:42:09 np0005548915 systemd[1]: libpod-conmon-009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d.scope: Deactivated successfully.
Dec  6 04:42:09 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: Rados config object exists: conf-nfs.cephfs
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck-rgw
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: Bind address in nfs.cephfs.2.0.compute-0.dfwxck's ganesha conf is defaulting to empty
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: Deploying daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0
Dec  6 04:42:09 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:09 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:09 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:09 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:09 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:09 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:09 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:42:10 np0005548915 podman[95682]: 2025-12-06 09:42:10.295149655 +0000 UTC m=+0.048161922 container create f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:10 np0005548915 podman[95682]: 2025-12-06 09:42:10.275054616 +0000 UTC m=+0.028066863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:42:10 np0005548915 podman[95682]: 2025-12-06 09:42:10.388316513 +0000 UTC m=+0.141328820 container init f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:42:10 np0005548915 podman[95682]: 2025-12-06 09:42:10.398844005 +0000 UTC m=+0.151856272 container start f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:42:10 np0005548915 bash[95682]: f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 04:42:10 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:10 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 9a6db2cb-2f95-4ec5-a56e-3692847dbc20 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  6 04:42:10 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 9a6db2cb-2f95-4ec5-a56e-3692847dbc20 (Updating nfs.cephfs deployment (+3 -> 3)) in 10 seconds
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:10 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev de016995-7e5d-4275-960f-5b2b33bc5989 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec  6 04:42:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 04:42:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:42:10 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.jmdafd on compute-1
Dec  6 04:42:10 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.jmdafd on compute-1
Dec  6 04:42:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 224 op/s
Dec  6 04:42:11 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:11 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:11 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:11 np0005548915 ceph-mon[74327]: Deploying daemon haproxy.nfs.cephfs.compute-1.jmdafd on compute-1
Dec  6 04:42:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.0 KiB/s wr, 197 op/s
Dec  6 04:42:14 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 15 completed events
Dec  6 04:42:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:42:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:14 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v39: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 4.0 KiB/s wr, 201 op/s
Dec  6 04:42:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  6 04:42:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:42:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:42:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:42:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:17 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.fzuvue on compute-0
Dec  6 04:42:17 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.fzuvue on compute-0
Dec  6 04:42:18 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:18 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:18 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:18 np0005548915 ceph-mon[74327]: Deploying daemon haproxy.nfs.cephfs.compute-0.fzuvue on compute-0
Dec  6 04:42:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f924c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v41: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  6 04:42:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v42: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  6 04:42:21 np0005548915 podman[95842]: 2025-12-06 09:42:21.23411749 +0000 UTC m=+3.216548728 container create a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec  6 04:42:21 np0005548915 systemd[1]: Started libpod-conmon-a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be.scope.
Dec  6 04:42:21 np0005548915 podman[95842]: 2025-12-06 09:42:21.211918534 +0000 UTC m=+3.194349812 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  6 04:42:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:21 np0005548915 podman[95842]: 2025-12-06 09:42:21.341398286 +0000 UTC m=+3.323829554 container init a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec  6 04:42:21 np0005548915 podman[95842]: 2025-12-06 09:42:21.355383221 +0000 UTC m=+3.337814469 container start a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec  6 04:42:21 np0005548915 podman[95842]: 2025-12-06 09:42:21.358801462 +0000 UTC m=+3.341232690 container attach a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec  6 04:42:21 np0005548915 optimistic_turing[95966]: 0 0
Dec  6 04:42:21 np0005548915 systemd[1]: libpod-a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be.scope: Deactivated successfully.
Dec  6 04:42:21 np0005548915 podman[95842]: 2025-12-06 09:42:21.366208291 +0000 UTC m=+3.348639549 container died a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec  6 04:42:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7539887866d61a7afe3d8aeb8f6af24f2bd3292e84750680226c1a53a2e49768-merged.mount: Deactivated successfully.
Dec  6 04:42:21 np0005548915 podman[95842]: 2025-12-06 09:42:21.453173153 +0000 UTC m=+3.435604421 container remove a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec  6 04:42:21 np0005548915 systemd[1]: libpod-conmon-a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be.scope: Deactivated successfully.
Dec  6 04:42:21 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:21 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:21 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:21 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:21 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:21 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:22 np0005548915 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.fzuvue for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:42:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:22 np0005548915 podman[96112]: 2025-12-06 09:42:22.575789585 +0000 UTC m=+0.076452511 container create 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:42:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693a501dd3650eb7d1c9ee2f1a762126a1db1583eff246bb8100dbca7914988a/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:22 np0005548915 podman[96112]: 2025-12-06 09:42:22.543517069 +0000 UTC m=+0.044180045 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  6 04:42:22 np0005548915 podman[96112]: 2025-12-06 09:42:22.64830391 +0000 UTC m=+0.148966826 container init 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:42:22 np0005548915 podman[96112]: 2025-12-06 09:42:22.659584672 +0000 UTC m=+0.160247598 container start 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:42:22 np0005548915 bash[96112]: 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d
Dec  6 04:42:22 np0005548915 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.fzuvue for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:42:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [NOTICE] 339/094222 (2) : New worker #1 (4) forked
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:22 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.voodna on compute-2
Dec  6 04:42:22 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.voodna on compute-2
Dec  6 04:42:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v43: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  6 04:42:23 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:23 np0005548915 ceph-mon[74327]: Deploying daemon haproxy.nfs.cephfs.compute-2.voodna on compute-2
Dec  6 04:42:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v44: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  6 04:42:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v45: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec  6 04:42:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.uzbtlt on compute-1
Dec  6 04:42:27 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.uzbtlt on compute-1
Dec  6 04:42:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:28 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:28 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:28 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:28 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:42:28
Dec  6 04:42:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:42:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:42:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Dec  6 04:42:28 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:42:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v46: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:42:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  6 04:42:29 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev cdc8c502-ca9c-4899-a366-64f1ce8e52db (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: Deploying daemon keepalived.nfs.cephfs.compute-1.uzbtlt on compute-1
Dec  6 04:42:29 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec  6 04:42:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  6 04:42:30 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev bfa2975c-8764-478e-9bbb-5a32e2b80a95 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v49: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec  6 04:42:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec  6 04:42:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:31 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  6 04:42:31 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 60d5fd36-984b-4eab-a302-a71ae27a4250 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:42:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=12.839952469s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active pruub 187.441436768s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=12.839952469s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown pruub 187.441436768s@ mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  6 04:42:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  6 04:42:32 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 0c16cd8d-5e6d-4b8a-aca7-6500be337c49 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:42:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.0( empty local-lis/les=54/55 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  6 04:42:32 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  6 04:42:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v52: 182 pgs: 46 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:33 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  6 04:42:33 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 1bc71b19-98e7-4226-b8eb-91d69d843741 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:42:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:33 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  6 04:42:33 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  6 04:42:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,108 pgs not in active + clean state
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.ylrrzf on compute-0
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.ylrrzf on compute-0
Dec  6 04:42:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  6 04:42:34 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 4397fc66-c55d-479e-ab63-e1f82d644844 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec  6 04:42:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:34 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  6 04:42:34 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v55: 244 pgs: 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:35 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953510284s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057052612s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.949063301s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.052627563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953651428s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057296753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953392982s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057052612s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.948952675s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.052627563s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953621864s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057296753s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953528404s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057434082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953548431s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057525635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953459740s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057434082s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953528404s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057525635s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953630447s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057693481s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953598022s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057693481s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953407288s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057586670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953390121s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057586670s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953093529s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057647705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952907562s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057083130s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.957175255s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061782837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953067780s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057647705s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.957158089s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061782837s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952960014s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057785034s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952484131s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057083130s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952932358s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057769775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952944756s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057785034s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952902794s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057769775s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956356049s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061447144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956339836s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061447144s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955930710s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061218262s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955874443s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061218262s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955976486s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061584473s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956527710s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.062194824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955955505s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061584473s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956453323s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.062194824s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955749512s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061660767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955728531s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061660767s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.951803207s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057800293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.951780319s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057800293s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956185341s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.062377930s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955442429s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061645508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956168175s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.062377930s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[10.0( v 51'1027 (0'0,51'1027] local-lis/les=45/46 n=178 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=14.998137474s) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 51'1026 mlcod 51'1026 active pruub 193.104400635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955418587s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061645508s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955781937s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.062271118s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955761909s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.062271118s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev ae258c83-ccb1-42e3-8a21-9cea967bd3ac (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev cdc8c502-ca9c-4899-a366-64f1ce8e52db (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event cdc8c502-ca9c-4899-a366-64f1ce8e52db (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev bfa2975c-8764-478e-9bbb-5a32e2b80a95 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event bfa2975c-8764-478e-9bbb-5a32e2b80a95 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 60d5fd36-984b-4eab-a302-a71ae27a4250 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 60d5fd36-984b-4eab-a302-a71ae27a4250 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 0c16cd8d-5e6d-4b8a-aca7-6500be337c49 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 0c16cd8d-5e6d-4b8a-aca7-6500be337c49 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 1bc71b19-98e7-4226-b8eb-91d69d843741 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 1bc71b19-98e7-4226-b8eb-91d69d843741 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 4397fc66-c55d-479e-ab63-e1f82d644844 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 4397fc66-c55d-479e-ab63-e1f82d644844 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev ae258c83-ccb1-42e3-8a21-9cea967bd3ac (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  6 04:42:35 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event ae258c83-ccb1-42e3-8a21-9cea967bd3ac (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: Deploying daemon keepalived.nfs.cephfs.compute-0.ylrrzf on compute-0
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.8( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.a( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.12( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.10( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[10.0( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=14.998137474s) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 51'1026 mlcod 0'0 unknown pruub 193.104400635s@ mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa79f68 space 0x55fcdf8f2eb0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa79ce8 space 0x55fcdf36e9d0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa62e88 space 0x55fcdf5b0d10 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa49748 space 0x55fcdf9bb6d0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5e2a8 space 0x55fcdf9bba10 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdf7dee88 space 0x55fcdf5b0aa0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa91c48 space 0x55fcdf9ec760 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f428 space 0x55fcdf9bbae0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5e348 space 0x55fcdf39f600 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f1a8 space 0x55fcdf9ec0e0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa62668 space 0x55fcdf5b0f80 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5fec8 space 0x55fcdf9ec5c0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa91ba8 space 0x55fcdf9ecde0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa78168 space 0x55fcdf9bb600 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5fa68 space 0x55fcdf9bb940 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa63a68 space 0x55fcdfad7870 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa90a28 space 0x55fcdf9ed390 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f928 space 0x55fcdf9ed1f0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa62ca8 space 0x55fcdf5b0de0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa40028 space 0x55fcdf5b09d0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdf3a00c8 space 0x55fcdfad7ae0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5e3e8 space 0x55fcdf9ec350 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa79d88 space 0x55fcdf5b0b70 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdede5568 space 0x55fcdf9bb530 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa63248 space 0x55fcdf9ec900 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f888 space 0x55fcdfad7530 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa417e8 space 0x55fcdf998de0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa91608 space 0x55fcdf9ec690 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa48988 space 0x55fcdf7ef6d0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa63748 space 0x55fcdf5b0eb0 0x0~1000 clean)
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  6 04:42:35 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  6 04:42:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  6 04:42:36 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1b( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.18( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.11( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.7( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.9( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.12( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.10( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1f( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1e( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1d( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1c( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1a( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.19( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.6( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.5( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.4( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.b( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.3( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.8( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.d( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.a( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.c( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.e( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.f( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.2( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.13( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.14( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.15( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.17( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.16( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.e( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.14( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.8( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.11( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.10( v 57'45 lc 51'14 (0'0,57'45] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=57'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.12( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.15( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.d( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.19( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.f( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.a( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.12( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.18( v 51'44 lc 51'18 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.1b( v 51'44 lc 51'8 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.4( v 51'44 (0'0,51'44] local-lis/les=58/59 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.6( v 44'12 lc 44'8 (0'0,44'12] local-lis/les=58/59 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.5( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.4( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.0( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 51'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.17( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.10( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v58: 306 pgs: 62 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 04:42:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  6 04:42:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:37 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:37 np0005548915 podman[96233]: 2025-12-06 09:42:37.653237268 +0000 UTC m=+2.792303254 container create 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Dec  6 04:42:37 np0005548915 podman[96233]: 2025-12-06 09:42:37.63837918 +0000 UTC m=+2.777445196 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  6 04:42:37 np0005548915 systemd[1]: Started libpod-conmon-536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990.scope.
Dec  6 04:42:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:37 np0005548915 podman[96233]: 2025-12-06 09:42:37.751122043 +0000 UTC m=+2.890188129 container init 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, release=1793, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec  6 04:42:37 np0005548915 podman[96233]: 2025-12-06 09:42:37.765739114 +0000 UTC m=+2.904805130 container start 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release=1793)
Dec  6 04:42:37 np0005548915 podman[96233]: 2025-12-06 09:42:37.770131182 +0000 UTC m=+2.909197258 container attach 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc.)
Dec  6 04:42:37 np0005548915 sleepy_pike[96329]: 0 0
Dec  6 04:42:37 np0005548915 systemd[1]: libpod-536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990.scope: Deactivated successfully.
Dec  6 04:42:37 np0005548915 podman[96233]: 2025-12-06 09:42:37.774871469 +0000 UTC m=+2.913937475 container died 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container)
Dec  6 04:42:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3c7f203e7f23d4a74df742a6178e9bec97dd0ab9ef690e177861e2eb1170b30e-merged.mount: Deactivated successfully.
Dec  6 04:42:37 np0005548915 podman[96233]: 2025-12-06 09:42:37.835430643 +0000 UTC m=+2.974496659 container remove 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Dec  6 04:42:37 np0005548915 systemd[1]: libpod-conmon-536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990.scope: Deactivated successfully.
Dec  6 04:42:37 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Dec  6 04:42:37 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Dec  6 04:42:37 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  6 04:42:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  6 04:42:37 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  6 04:42:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 60 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=60 pruub=8.863116264s) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active pruub 189.172805786s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 60 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=60 pruub=8.863116264s) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown pruub 189.172805786s@ mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:38 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:38 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:38 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:38 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:38 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:38 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:38 np0005548915 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.ylrrzf for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:38 np0005548915 podman[96477]: 2025-12-06 09:42:38.838195271 +0000 UTC m=+0.063274778 container create d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4)
Dec  6 04:42:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.0 deep-scrub starts
Dec  6 04:42:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e6fb7e0d8db1de1a51ec46a29d871d2c8acb20ef652492c70bda017a34640e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.0 deep-scrub ok
Dec  6 04:42:38 np0005548915 podman[96477]: 2025-12-06 09:42:38.907172891 +0000 UTC m=+0.132252488 container init d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=2.2.4, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  6 04:42:38 np0005548915 podman[96477]: 2025-12-06 09:42:38.817991789 +0000 UTC m=+0.043071336 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  6 04:42:38 np0005548915 podman[96477]: 2025-12-06 09:42:38.916898781 +0000 UTC m=+0.141978308 container start d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, com.redhat.component=keepalived-container, architecture=x86_64)
Dec  6 04:42:38 np0005548915 bash[96477]: d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f
Dec  6 04:42:38 np0005548915 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.ylrrzf for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Starting VRRP child process, pid=4
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Startup complete
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: (VI_0) Entering BACKUP STATE (init)
Dec  6 04:42:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: VRRP_Script(check_backend) succeeded
Dec  6 04:42:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  6 04:42:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 725 B/s rd, 0 op/s
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.15( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.9( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.d( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.5( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1f( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1b( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.18( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.16( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.14( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.f( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.15( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.9( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.5( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.0( empty local-lis/les=60/61 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1f( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.16( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.14( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.f( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 22 completed events
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:42:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:39 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:42:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.whsrlg on compute-2
Dec  6 04:42:39 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.whsrlg on compute-2
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  6 04:42:39 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  6 04:42:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: Deploying daemon keepalived.nfs.cephfs.compute-2.whsrlg on compute-2
Dec  6 04:42:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:40 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  6 04:42:40 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  6 04:42:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 keys/s, 3 objects/s recovering
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:41 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.11( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885075569s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.367706299s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.11( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885017395s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.367706299s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828448296s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311187744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828311920s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311187744s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887252808s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370223999s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887207031s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370223999s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828056335s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311126709s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828003883s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311126709s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886998177s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370239258s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886981964s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370239258s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827672958s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311065674s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827654839s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311065674s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.4( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887239456s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370712280s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886943817s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370452881s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.4( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887214661s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370712280s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886919975s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370452881s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887412071s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371078491s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887392998s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371078491s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827157974s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311004639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887278557s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371185303s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827139854s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311004639s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887115479s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371124268s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887250900s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371185303s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887094498s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371124268s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.9( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886641502s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370697021s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.9( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886605263s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370697021s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886935234s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371154785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886913300s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371154785s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826405525s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310913086s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826385498s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310913086s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886591911s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371231079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886525154s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371200562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886573792s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371231079s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886501312s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371200562s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826035500s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310882568s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826019287s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310882568s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825989723s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310928345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825959206s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310928345s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.2( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886281967s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371292114s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886356354s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371398926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886339188s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371398926s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825742722s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'1028 lcod 59'1029 mlcod 59'1029 active pruub 195.310867310s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825701714s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'1028 lcod 59'1029 mlcod 0'0 unknown NOTIFY pruub 195.310867310s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825399399s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310806274s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.2( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886258125s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371292114s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886066437s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371490479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825379372s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310806274s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886047363s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371490479s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885918617s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371459961s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885900497s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371459961s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824880600s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310684204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891217232s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377014160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824854851s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310684204s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891183853s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377014160s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824689865s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310638428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824667931s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310638428s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890976906s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.376953125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824485779s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310668945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890837669s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.376953125s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824462891s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310668945s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890755653s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.376983643s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890736580s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.376983643s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824155807s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310455322s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824110031s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310455322s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820496559s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.306900024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820478439s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.306900024s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890796661s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377243042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890778542s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377243042s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891049385s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377578735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.823896408s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310440063s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891033173s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377578735s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.823879242s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310440063s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820156097s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.306838989s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820139885s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.306838989s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.17( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.889799118s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377243042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.17( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.889700890s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377243042s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.6( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.2( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.e( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.a( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  6 04:42:41 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  6 04:42:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  6 04:42:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  6 04:42:42 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=59'1028 lcod 59'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=59'1028 lcod 59'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:42 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  6 04:42:42 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:42 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  6 04:42:42 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1a( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1e( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1c( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.7( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.a( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1b( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.4( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.5( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.f( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.2( v 50'39 (0'0,50'39] local-lis/les=62/63 n=2 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1d( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.12( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.e( v 50'39 lc 48'19 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.14( v 61'51 lc 48'43 (0'0,61'51] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=61'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.6( v 50'39 lc 0'0 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:42 np0005548915 python3[96527]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:42:42 np0005548915 podman[96528]: 2025-12-06 09:42:42.437783489 +0000 UTC m=+0.043319563 container create 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 04:42:42 np0005548915 systemd[1]: Started libpod-conmon-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope.
Dec  6 04:42:42 np0005548915 podman[96528]: 2025-12-06 09:42:42.420080155 +0000 UTC m=+0.025616249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:42:42 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2c6db28623a256a659c78dd640bed5d1bdc5318f86bb4efb5d886f330cc9d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2c6db28623a256a659c78dd640bed5d1bdc5318f86bb4efb5d886f330cc9d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:42 np0005548915 podman[96528]: 2025-12-06 09:42:42.556231225 +0000 UTC m=+0.161767299 container init 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:42:42 np0005548915 podman[96528]: 2025-12-06 09:42:42.564416885 +0000 UTC m=+0.169952959 container start 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:42:42 np0005548915 podman[96528]: 2025-12-06 09:42:42.56869778 +0000 UTC m=+0.174233854 container attach 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:42:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:42 2025: (VI_0) Entering MASTER STATE
Dec  6 04:42:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.19 deep-scrub starts
Dec  6 04:42:42 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.19 deep-scrub ok
Dec  6 04:42:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 1 keys/s, 3 objects/s recovering
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  6 04:42:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:43 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.b( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.f( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.7( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.3( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.789395332s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311187744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.789361000s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311187744s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788603783s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311019897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788549423s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311019897s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788349152s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311004639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788297653s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311004639s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.787719727s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310928345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.787698746s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310928345s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786976814s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310775757s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786928177s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310775757s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786486626s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310760498s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786152840s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310668945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786112785s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310668945s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786048889s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310760498s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.785719872s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310623169s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.785683632s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310623169s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=61'1030 lcod 59'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  6 04:42:43 np0005548915 silly_noether[96543]: could not fetch user info: no user info saved
Dec  6 04:42:43 np0005548915 systemd[1]: libpod-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope: Deactivated successfully.
Dec  6 04:42:43 np0005548915 conmon[96543]: conmon 0d3d6bdb46ceb7f67e9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope/container/memory.events
Dec  6 04:42:43 np0005548915 podman[96528]: 2025-12-06 09:42:43.338392637 +0000 UTC m=+0.943928721 container died 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:42:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9a2c6db28623a256a659c78dd640bed5d1bdc5318f86bb4efb5d886f330cc9d3-merged.mount: Deactivated successfully.
Dec  6 04:42:43 np0005548915 podman[96528]: 2025-12-06 09:42:43.390883385 +0000 UTC m=+0.996419469 container remove 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 04:42:43 np0005548915 systemd[1]: libpod-conmon-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope: Deactivated successfully.
Dec  6 04:42:43 np0005548915 python3[96666]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:43 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev de016995-7e5d-4275-960f-5b2b33bc5989 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  6 04:42:43 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event de016995-7e5d-4275-960f-5b2b33bc5989 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 33 seconds
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  6 04:42:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:43 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev c64c0c91-5f30-4161-a910-ead2e7fb7a40 (Updating alertmanager deployment (+1 -> 1))
Dec  6 04:42:43 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec  6 04:42:43 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec  6 04:42:43 np0005548915 podman[96667]: 2025-12-06 09:42:43.855067502 +0000 UTC m=+0.072364792 container create bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 04:42:43 np0005548915 systemd[1]: Started libpod-conmon-bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2.scope.
Dec  6 04:42:43 np0005548915 podman[96667]: 2025-12-06 09:42:43.829033084 +0000 UTC m=+0.046330414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:42:43 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07861bd24af8f284c0de9b8507b945a64416f6028d60acd26b2f9e8825ddd79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:43 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07861bd24af8f284c0de9b8507b945a64416f6028d60acd26b2f9e8825ddd79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:43 np0005548915 podman[96667]: 2025-12-06 09:42:43.956711957 +0000 UTC m=+0.174009277 container init bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:42:43 np0005548915 podman[96667]: 2025-12-06 09:42:43.969293305 +0000 UTC m=+0.186590595 container start bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:42:43 np0005548915 podman[96667]: 2025-12-06 09:42:43.974798513 +0000 UTC m=+0.192095833 container attach bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:42:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  6 04:42:44 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 23 completed events
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.009078026s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531784058s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008981705s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531784058s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.009040833s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531906128s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008938789s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531906128s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.009285927s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531967163s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008082390s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531616211s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008004189s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531616211s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008171082s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531967163s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.007225990s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531600952s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.007121086s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531600952s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.007177353s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531723022s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.006881714s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531723022s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.5( v 64'1034 (0'0,64'1034] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.002076149s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=61'1030 lcod 64'1033 mlcod 64'1033 active pruub 201.527008057s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.5( v 64'1034 (0'0,64'1034] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001951218s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=61'1030 lcod 64'1033 mlcod 0'0 unknown NOTIFY pruub 201.527008057s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001643181s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527099609s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001572609s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527099609s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.002022743s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527618408s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000745773s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.526901245s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001730919s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527618408s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000676155s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.526901245s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000605583s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527404785s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998142242s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.525177002s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000528336s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527404785s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998067856s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.525177002s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999908447s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527114868s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999868393s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527114868s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999295235s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.526809692s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999224663s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.526809692s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999085426s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527191162s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998382568s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.526733398s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998795509s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527191162s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998186111s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.526733398s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.3( v 50'39 lc 0'0 (0'0,50'39] local-lis/les=64/65 n=2 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.7( v 50'39 lc 48'20 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.f( v 50'39 lc 48'1 (0'0,50'39] local-lis/les=64/65 n=3 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:44 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.b( v 50'39 lc 0'0 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:44 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 40220b6f-6097-4335-9da7-9e13df932a5c (Global Recovery Event) in 10 seconds
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:44 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 3 op/s; 1/226 objects misplaced (0.442%); 722 B/s, 2 keys/s, 23 objects/s recovering
Dec  6 04:42:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:45 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: Deploying daemon alertmanager.compute-0 on compute-0
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]: {
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "user_id": "openstack",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "display_name": "openstack",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "email": "",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "suspended": 0,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "max_buckets": 1000,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "subusers": [],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "keys": [
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        {
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:            "user": "openstack",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:            "access_key": "Y0BEIM7RZZC67P1B4QTT",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:            "secret_key": "QWkZChaKG8LtAwCXnQ83vi9JO4rkOzAfCx5grxQK",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:            "active": true,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:            "create_date": "2025-12-06T09:42:45.291408Z"
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        }
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    ],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "swift_keys": [],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "caps": [],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "op_mask": "read, write, delete",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "default_placement": "",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "default_storage_class": "",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "placement_tags": [],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "bucket_quota": {
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "enabled": false,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "check_on_raw": false,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "max_size": -1,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "max_size_kb": 0,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "max_objects": -1
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    },
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "user_quota": {
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "enabled": false,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "check_on_raw": false,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "max_size": -1,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "max_size_kb": 0,
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:        "max_objects": -1
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    },
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "temp_url_keys": [],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "type": "rgw",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "mfa_ids": [],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "account_id": "",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "path": "/",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "create_date": "2025-12-06T09:42:45.290435Z",
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "tags": [],
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]:    "group_ids": []
Dec  6 04:42:45 np0005548915 gracious_blackwell[96705]: }
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  6 04:42:45 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 67 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=67 pruub=15.884645462s) [0] async=[0] r=-1 lpr=67 pi=[58,67)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.544784546s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 67 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=67 pruub=15.884576797s) [0] r=-1 lpr=67 pi=[58,67)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.544784546s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:45 np0005548915 systemd[1]: libpod-bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2.scope: Deactivated successfully.
Dec  6 04:42:45 np0005548915 podman[96667]: 2025-12-06 09:42:45.967743441 +0000 UTC m=+2.185040721 container died bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 04:42:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e07861bd24af8f284c0de9b8507b945a64416f6028d60acd26b2f9e8825ddd79-merged.mount: Deactivated successfully.
Dec  6 04:42:46 np0005548915 podman[96667]: 2025-12-06 09:42:46.293664819 +0000 UTC m=+2.510962099 container remove bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:42:46 np0005548915 systemd[1]: libpod-conmon-bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2.scope: Deactivated successfully.
Dec  6 04:42:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  6 04:42:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  6 04:42:46 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878108978s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.548782349s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877959251s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.548721313s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878348351s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549118042s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878028870s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.548782349s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878499985s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549407959s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877865791s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.548721313s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878448486s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549407959s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878067017s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549087524s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878017426s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549087524s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877964020s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549118042s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877978325s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549209595s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877589226s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.548934937s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877914429s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549209595s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:46 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877550125s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.548934937s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.360200624 +0000 UTC m=+1.920910748 volume create 55bf1e0cfc98ad90888b42fa4ce9bd26d0941c436cb72af7b9d3cb62ff298b73
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.368348832 +0000 UTC m=+1.929058956 container create 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.338349918 +0000 UTC m=+1.899060122 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  6 04:42:46 np0005548915 systemd[1]: Started libpod-conmon-4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575.scope.
Dec  6 04:42:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:46 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d491c645902ec7875796dbf6576d3bd2d10093445a7eeec1f1abef0ca1976926/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.45401817 +0000 UTC m=+2.014728304 container init 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.460642427 +0000 UTC m=+2.021352551 container start 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:46 np0005548915 vibrant_bhabha[97017]: 65534 65534
Dec  6 04:42:46 np0005548915 systemd[1]: libpod-4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575.scope: Deactivated successfully.
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.466546845 +0000 UTC m=+2.027256969 container attach 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.467208893 +0000 UTC m=+2.027919077 container died 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d491c645902ec7875796dbf6576d3bd2d10093445a7eeec1f1abef0ca1976926-merged.mount: Deactivated successfully.
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.900110341 +0000 UTC m=+2.460820465 container remove 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:46 np0005548915 podman[96854]: 2025-12-06 09:42:46.905937237 +0000 UTC m=+2.466647431 volume remove 55bf1e0cfc98ad90888b42fa4ce9bd26d0941c436cb72af7b9d3cb62ff298b73
Dec  6 04:42:46 np0005548915 systemd[1]: libpod-conmon-4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575.scope: Deactivated successfully.
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.00488526 +0000 UTC m=+0.062972379 volume create 0e5088438877dddb2afc78eb30ca20bf07027d633261d26e8773c98535dd080e
Dec  6 04:42:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 3 op/s; 1/226 objects misplaced (0.442%); 757 B/s, 2 keys/s, 24 objects/s recovering
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.022013299 +0000 UTC m=+0.080100378 container create dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:47 np0005548915 python3[97056]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:42:47 np0005548915 systemd[1]: Started libpod-conmon-dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324.scope.
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:46.986079785 +0000 UTC m=+0.044166904 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  6 04:42:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/603cea872761c38c779da728e65dac381864c16a1e05c3dd744cf4f1a8953f17/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.121514768 +0000 UTC m=+0.179601857 container init dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.128085333 +0000 UTC m=+0.186172462 container start dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:47 np0005548915 happy_chatterjee[97076]: 65534 65534
Dec  6 04:42:47 np0005548915 systemd[1]: libpod-dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324.scope: Deactivated successfully.
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.13316056 +0000 UTC m=+0.191247659 container attach dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.133956441 +0000 UTC m=+0.192043560 container died dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-603cea872761c38c779da728e65dac381864c16a1e05c3dd744cf4f1a8953f17-merged.mount: Deactivated successfully.
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.197759182 +0000 UTC m=+0.255846311 container remove dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:47 np0005548915 podman[97058]: 2025-12-06 09:42:47.203831965 +0000 UTC m=+0.261919134 volume remove 0e5088438877dddb2afc78eb30ca20bf07027d633261d26e8773c98535dd080e
Dec  6 04:42:47 np0005548915 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:43702] [GET] [200] [0.129s] [6.3K] [1d75e518-79d6-4695-9ca7-e976e7bffe43] /
Dec  6 04:42:47 np0005548915 systemd[1]: libpod-conmon-dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324.scope: Deactivated successfully.
Dec  6 04:42:47 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  6 04:42:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  6 04:42:47 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  6 04:42:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:47 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec  6 04:42:47 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:47 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:47 np0005548915 python3[97152]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:42:47 np0005548915 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:43712] [GET] [200] [0.003s] [6.3K] [a77a6587-a07c-471d-8155-1790bf33a6b0] /
Dec  6 04:42:47 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:47 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:47 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:47 np0005548915 systemd[1]: Starting Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:48 np0005548915 podman[97244]: 2025-12-06 09:42:48.316282593 +0000 UTC m=+0.059299671 volume create cc9140d1b399a34df664d17bf3d5da457ec5a14a1279788aa2852185673a3bfd
Dec  6 04:42:48 np0005548915 podman[97244]: 2025-12-06 09:42:48.329363744 +0000 UTC m=+0.072380802 container create b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2a73ca3b14a2c20beb30faadb6ace12cd5adb72f156644e5801ee5b84b2c3c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2a73ca3b14a2c20beb30faadb6ace12cd5adb72f156644e5801ee5b84b2c3c/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:48 np0005548915 podman[97244]: 2025-12-06 09:42:48.299734509 +0000 UTC m=+0.042751607 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  6 04:42:48 np0005548915 podman[97244]: 2025-12-06 09:42:48.410835419 +0000 UTC m=+0.153852497 container init b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:48 np0005548915 podman[97244]: 2025-12-06 09:42:48.418300679 +0000 UTC m=+0.161317737 container start b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:42:48 np0005548915 bash[97244]: b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b
Dec  6 04:42:48 np0005548915 systemd[1]: Started Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.462Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.462Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.476Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.478Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.525Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.527Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.533Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.533Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev c64c0c91-5f30-4161-a910-ead2e7fb7a40 (Updating alertmanager deployment (+1 -> 1))
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event c64c0c91-5f30-4161-a910-ead2e7fb7a40 (Updating alertmanager deployment (+1 -> 1)) in 5 seconds
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  6 04:42:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 0b16bde7-b1bb-4174-ba29-7d221cc5d567 (Updating grafana deployment (+1 -> 1))
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec  6 04:42:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec  6 04:42:48 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec  6 04:42:48 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  6 04:42:49 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  6 04:42:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1/226 objects misplaced (0.442%)
Dec  6 04:42:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:49 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:49 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 25 completed events
Dec  6 04:42:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:42:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec  6 04:42:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec  6 04:42:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,25 pgs not in active + clean state
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: Regenerating cephadm self-signed grafana TLS certificates
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: Deploying daemon grafana.compute-0 on compute-0
Dec  6 04:42:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:50.479Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00046291s
Dec  6 04:42:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec  6 04:42:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec  6 04:42:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v75: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  6 04:42:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:51 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  6 04:42:51 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  6 04:42:51 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec  6 04:42:51 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec  6 04:42:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  6 04:42:52 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  6 04:42:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c000d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:52 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec  6 04:42:52 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec  6 04:42:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  6 04:42:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:53 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.601285934s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 211.311798096s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.601237297s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.311798096s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.600256920s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 211.311447144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.600227356s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.311447144s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.599624634s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=66'1034 lcod 66'1033 mlcod 66'1033 active pruub 211.311111450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.599576950s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 unknown NOTIFY pruub 211.311111450s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.599024773s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 211.311080933s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.598998070s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.311080933s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[6.5( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[6.d( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  6 04:42:53 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  6 04:42:54 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec  6 04:42:54 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec  6 04:42:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  6 04:42:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Dec  6 04:42:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Dec  6 04:42:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  6 04:42:55 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[6.5( v 50'39 lc 48'11 (0'0,50'39] local-lis/les=71/72 n=2 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:55 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  6 04:42:55 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  6 04:42:55 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[6.d( v 50'39 lc 48'13 (0'0,50'39] local-lis/les=71/72 n=1 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:55 np0005548915 podman[97372]: 2025-12-06 09:42:55.308596724 +0000 UTC m=+5.757647785 container create 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:42:55 np0005548915 systemd[1]: Started libpod-conmon-006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72.scope.
Dec  6 04:42:55 np0005548915 podman[97372]: 2025-12-06 09:42:55.291871815 +0000 UTC m=+5.740922896 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  6 04:42:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:55 np0005548915 podman[97372]: 2025-12-06 09:42:55.417113354 +0000 UTC m=+5.866164515 container init 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 podman[97372]: 2025-12-06 09:42:55.431793427 +0000 UTC m=+5.880844488 container start 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 podman[97372]: 2025-12-06 09:42:55.435809406 +0000 UTC m=+5.884860467 container attach 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 quirky_shockley[97593]: 472 0
Dec  6 04:42:55 np0005548915 systemd[1]: libpod-006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72.scope: Deactivated successfully.
Dec  6 04:42:55 np0005548915 podman[97372]: 2025-12-06 09:42:55.439654318 +0000 UTC m=+5.888705419 container died 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7345b0e7728ef9c5d02e03a7c10198624581e37250cf94428cdd6be56d331e4c-merged.mount: Deactivated successfully.
Dec  6 04:42:55 np0005548915 podman[97372]: 2025-12-06 09:42:55.494177341 +0000 UTC m=+5.943228432 container remove 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 systemd[1]: libpod-conmon-006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72.scope: Deactivated successfully.
Dec  6 04:42:55 np0005548915 podman[97609]: 2025-12-06 09:42:55.581709608 +0000 UTC m=+0.054104572 container create b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 systemd[1]: Started libpod-conmon-b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1.scope.
Dec  6 04:42:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:55 np0005548915 podman[97609]: 2025-12-06 09:42:55.647286966 +0000 UTC m=+0.119682000 container init b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 podman[97609]: 2025-12-06 09:42:55.559538383 +0000 UTC m=+0.031933407 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  6 04:42:55 np0005548915 happy_raman[97627]: 472 0
Dec  6 04:42:55 np0005548915 podman[97609]: 2025-12-06 09:42:55.659653298 +0000 UTC m=+0.132048262 container start b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 systemd[1]: libpod-b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1.scope: Deactivated successfully.
Dec  6 04:42:55 np0005548915 podman[97609]: 2025-12-06 09:42:55.668823513 +0000 UTC m=+0.141218587 container attach b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 podman[97609]: 2025-12-06 09:42:55.669199043 +0000 UTC m=+0.141594027 container died b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-28aca0a3e4dd3272a5f192e6d68c442eda19a31b5ea64d4a15cffa5a861c0179-merged.mount: Deactivated successfully.
Dec  6 04:42:55 np0005548915 podman[97609]: 2025-12-06 09:42:55.717647942 +0000 UTC m=+0.190042946 container remove b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:55 np0005548915 systemd[1]: libpod-conmon-b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1.scope: Deactivated successfully.
Dec  6 04:42:55 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:55 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:55 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:56 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec  6 04:42:56 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  6 04:42:56 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:56 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:56 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=72/73 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:56 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:42:56 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:56 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:56 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:56 np0005548915 systemd[1]: Starting Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:42:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:56 np0005548915 podman[97771]: 2025-12-06 09:42:56.723141443 +0000 UTC m=+0.068061256 container create cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:56 np0005548915 podman[97771]: 2025-12-06 09:42:56.686546512 +0000 UTC m=+0.031466405 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  6 04:42:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:56 np0005548915 podman[97771]: 2025-12-06 09:42:56.820626577 +0000 UTC m=+0.165546390 container init cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:56 np0005548915 podman[97771]: 2025-12-06 09:42:56.825751565 +0000 UTC m=+0.170671378 container start cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:42:56 np0005548915 bash[97771]: cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2
Dec  6 04:42:56 np0005548915 systemd[1]: Started Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:56 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 0b16bde7-b1bb-4174-ba29-7d221cc5d567 (Updating grafana deployment (+1 -> 1))
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  6 04:42:56 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 0b16bde7-b1bb-4174-ba29-7d221cc5d567 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:56 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 8b961573-5d9a-4966-9430-80966b578f70 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  6 04:42:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:57 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.vhqyer on compute-0
Dec  6 04:42:57 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.vhqyer on compute-0
Dec  6 04:42:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093638548Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-06T09:42:57Z
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093899425Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093906605Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093910435Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093914116Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093917466Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093920666Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093927016Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093931276Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093935396Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093938996Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093942156Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093945666Z level=info msg=Target target=[all]
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093955967Z level=info msg="Path Home" path=/usr/share/grafana
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093959437Z level=info msg="Path Data" path=/var/lib/grafana
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093962477Z level=info msg="Path Logs" path=/var/log/grafana
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093965487Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093968757Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093971897Z level=info msg="App mode production"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore t=2025-12-06T09:42:57.096010491Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore t=2025-12-06T09:42:57.096034572Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.096915376Z level=info msg="Starting DB migrations"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.098119488Z level=info msg="Executing migration" id="create migration_log table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.099325261Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.205483ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.1019407Z level=info msg="Executing migration" id="create user table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.102684521Z level=info msg="Migration successfully executed" id="create user table" duration=743.681µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.10454477Z level=info msg="Executing migration" id="add unique index user.login"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.107591292Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=3.039582ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.111067865Z level=info msg="Executing migration" id="add unique index user.email"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.112458312Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.395467ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.115079703Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.118620277Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=3.538974ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.12059818Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.121213937Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=615.827µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.125066411Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.128348179Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.284098ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.130866326Z level=info msg="Executing migration" id="create user table v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.13178245Z level=info msg="Migration successfully executed" id="create user table v2" duration=915.894µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.133902327Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.134709949Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=805.922µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.136660151Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.137463683Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=802.722µs
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.140139295Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.140660948Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=539.144µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.143420182Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.144941604Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.521351ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.14741478Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.149951178Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=2.535518ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.153079742Z level=info msg="Executing migration" id="Update user table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.153163614Z level=info msg="Migration successfully executed" id="Update user table charset" duration=86.382µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.156119513Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.158351093Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.23061ms
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.161393295Z level=info msg="Executing migration" id="Add missing user data"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.161939479Z level=info msg="Migration successfully executed" id="Add missing user data" duration=547.034µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.164730674Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.971620560s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 214.454605103s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.971525192s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.454605103s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.166351757Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.620473ms
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.966494560s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 214.449996948s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.966411591s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.449996948s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.4( v 73'1035 (0'0,73'1035] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.970807076s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=66'1034 lcod 66'1034 mlcod 66'1034 active pruub 214.454681396s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.4( v 73'1035 (0'0,73'1035] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.970702171s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=66'1034 lcod 66'1034 mlcod 0'0 unknown NOTIFY pruub 214.454681396s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.969923019s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 214.454483032s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:42:57 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.969791412s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.454483032s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.168362761Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.169613505Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.248044ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:57 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.17169424Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:57 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.172961635Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.266975ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.174913328Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.181603917Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.69036ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.183172428Z level=info msg="Executing migration" id="Add uid column to user"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.184100144Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=927.036µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.186067977Z level=info msg="Executing migration" id="Update uid column values for users"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.186229971Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=161.974µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.187887545Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.18846563Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=577.575µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.190860424Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.191597164Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=619.497µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.194286826Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.195563541Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.280985ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.19814104Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.198849889Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=739.42µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.202010563Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.202669661Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=658.988µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.204898621Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.205906408Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.008057ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.208681813Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.208718694Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=37.761µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.211041425Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.211959971Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=918.866µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.213972884Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.214611412Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=638.708µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.216576044Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.217257123Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=681.359µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.219314857Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.219987196Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=672.949µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.221940608Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.224749403Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.808635ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.227102866Z level=info msg="Executing migration" id="create temp_user v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.228659558Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.558562ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.231270419Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.232336777Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.078049ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.234550856Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.236079107Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.537121ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.238467641Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.239365746Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=906.505µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.241355018Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.242227472Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=871.954µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.24510303Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.245798838Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=695.979µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.247759611Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.248527481Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=767.68µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.250876004Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.251408618Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=532.164µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.253844544Z level=info msg="Executing migration" id="create star table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.25484166Z level=info msg="Migration successfully executed" id="create star table" duration=997.386µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.257014429Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.257963804Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=948.345µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.260704557Z level=info msg="Executing migration" id="create org table v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.261619281Z level=info msg="Migration successfully executed" id="create org table v1" duration=914.544µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.264224571Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.265103386Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=878.265µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.267461619Z level=info msg="Executing migration" id="create org_user table v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.26824087Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=779.021µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.270392398Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.271328522Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=936.225µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.273670605Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.27461569Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=940.995µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.278407002Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.279299966Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=892.774µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.281733472Z level=info msg="Executing migration" id="Update org table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.281762352Z level=info msg="Migration successfully executed" id="Update org table charset" duration=30.03µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.283716324Z level=info msg="Executing migration" id="Update org_user table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.283755095Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=40.141µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.285997095Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.286238182Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=240.847µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.289219422Z level=info msg="Executing migration" id="create dashboard table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.290554248Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.335006ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.293259791Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.29435753Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.097429ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.296817526Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.297895234Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.077128ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.301515332Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.302339704Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=813.712µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.304704577Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.305648582Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=943.835µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.307950354Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.308948951Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.010307ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.310701218Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.317426548Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.72422ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.319414812Z level=info msg="Executing migration" id="create dashboard v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.320325656Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=910.274µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.322615687Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.323538733Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=922.496µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.32641934Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.327365915Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=945.915µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.3297798Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.330219531Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=448.321µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.332941144Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.334212118Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.270894ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.336864919Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.337004803Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=95.003µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.339429278Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.342001807Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.571929ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.343815676Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.345778439Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.962163ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.348173392Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.350105335Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.931923ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.352394866Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.353360862Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=965.696µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.355399797Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.35740026Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.999963ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.359247599Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.360216756Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=968.467µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.362706233Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.363686849Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=981.216µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.366190986Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.366231857Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=39.031µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.36858332Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.368624392Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=44.131µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.370595274Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.372826984Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.23236ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.374891329Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.376932194Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.040495ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.379228865Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.381300281Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.068286ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.383576812Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.38571603Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.138808ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.387896378Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.38833811Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=441.982µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.390481587Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.391405342Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=924.095µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.39393114Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.394875255Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=943.815µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.39767276Z level=info msg="Executing migration" id="Update dashboard title length"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.397699771Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=27.151µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.399790907Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.400736502Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=948.235µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.402471719Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.403311291Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=840.212µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.405696035Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.411447459Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.750674ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.413767592Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.414611954Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=844.342µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.41705137Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.417975554Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=924.484µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.42078922Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.421707645Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=917.945µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.424367166Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.424844569Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=477.053µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.426745219Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.42750263Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=732.55µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.430830949Z level=info msg="Executing migration" id="Add check_sum column"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.433060729Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.22924ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.43609196Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.437358514Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.267614ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.440222351Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.440448467Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=226.526µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.442354698Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.442572674Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=232.816µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.444346242Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.445336778Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=989.936µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.448467162Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.451242146Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.774474ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.453410935Z level=info msg="Executing migration" id="create data_source table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.454572225Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.16125ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.45697152Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.457878305Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=905.915µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.460184116Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.461112471Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=930.945µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.463910686Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.464821761Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=907.934µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.466812694Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.46780164Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=987.696µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.46964891Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.476099533Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.446153ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.478087416Z level=info msg="Executing migration" id="create data_source table v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.479170675Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.082769ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.481225511Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.482232487Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.006597ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.484413146Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.485800594Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.388348ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.489922934Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.491197837Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.276303ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.493265413Z level=info msg="Executing migration" id="Add column with_credentials"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.495942715Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.676532ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.498674618Z level=info msg="Executing migration" id="Add secure json data column"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.50133254Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.654351ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.504121114Z level=info msg="Executing migration" id="Update data_source table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.504154305Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=31.691µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.506135709Z level=info msg="Executing migration" id="Update initial version to 1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.506386505Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=251.696µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.508539493Z level=info msg="Executing migration" id="Add read_only data column"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.511135203Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.59459ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.513049623Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.513322431Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=272.308µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.51552801Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.515854919Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=334.519µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.51776128Z level=info msg="Executing migration" id="Add uid column"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.519835756Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.074486ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.522007314Z level=info msg="Executing migration" id="Update uid value"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.52221645Z level=info msg="Migration successfully executed" id="Update uid value" duration=209.877µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.524199753Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.524998675Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=798.912µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.527197183Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.527854971Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=657.618µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.529976287Z level=info msg="Executing migration" id="create api_key table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.530673587Z level=info msg="Migration successfully executed" id="create api_key table" duration=696.78µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.534000956Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.534635003Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=633.357µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.536845382Z level=info msg="Executing migration" id="add index api_key.key"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.537473469Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=627.747µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.539853913Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.540546221Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=691.988µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.543012017Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.543674045Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=661.828µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.545749301Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.546392038Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=642.467µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.548610777Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.550339764Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.733237ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.552931453Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.562097509Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.165356ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.564674658Z level=info msg="Executing migration" id="create api_key table v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.566046195Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.369756ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.568673615Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.570062843Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.389368ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.572325963Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.573739961Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.412988ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.57591108Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.577258565Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.344135ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.581122279Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.581828638Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=705.609µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.584152741Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.585322212Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.169001ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.588055585Z level=info msg="Executing migration" id="Update api_key table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.588122757Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=68.412µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.590573503Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.594916688Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.342576ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.596887742Z level=info msg="Executing migration" id="Add service account foreign key"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.6012929Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.404179ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.603750096Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.604069595Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=318.618µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.606710386Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.612020208Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=5.308433ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.614981957Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.619880278Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=4.897441ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.623165467Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.62479754Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.630283ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.627349068Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.628614642Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.264844ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.631288484Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.63301452Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.724626ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.635260571Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.637366858Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=2.103096ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.653993903Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.656152511Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=2.160167ms
Dec  6 04:42:57 np0005548915 podman[97899]: 2025-12-06 09:42:57.623151516 +0000 UTC m=+0.031425944 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  6 04:42:57 np0005548915 podman[97899]: 2025-12-06 09:42:57.924824495 +0000 UTC m=+0.333098863 container create a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.926381717Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.930104027Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=3.723829ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.936853018Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.937012542Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=157.544µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.940164697Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.940233938Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=61.901µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.94365511Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.948063189Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.408929ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.951203833Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.954513181Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.308468ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.96714544Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.967689495Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=547.766µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.969966245Z level=info msg="Executing migration" id="create quota table v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.971338852Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.371997ms
Dec  6 04:42:57 np0005548915 systemd[1]: Started libpod-conmon-a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e.scope.
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.975412111Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.976656495Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.248554ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.979348677Z level=info msg="Executing migration" id="Update quota table charset"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.979373508Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=25.001µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.981460263Z level=info msg="Executing migration" id="create plugin_setting table"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.98244076Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=979.747µs
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.985183904Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.986235782Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.052437ms
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.991191315Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec  6 04:42:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.996932779Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.744154ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.001721777Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.001769729Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=48.822µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.004699787Z level=info msg="Executing migration" id="create session table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.006643298Z level=info msg="Migration successfully executed" id="create session table" duration=1.939771ms
Dec  6 04:42:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.01155712Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.01190941Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=351.96µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.014592292Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.014780667Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=186.245µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.017654494Z level=info msg="Executing migration" id="create playlist table v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.019339109Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.684655ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.022844513Z level=info msg="Executing migration" id="create playlist item table v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.024437646Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.593153ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.028788142Z level=info msg="Executing migration" id="Update playlist table charset"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.028830383Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=43.551µs
Dec  6 04:42:58 np0005548915 podman[97899]: 2025-12-06 09:42:58.030315724 +0000 UTC m=+0.438590132 container init a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.03130977Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.031353681Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=45.091µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.033812567Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:58 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec  6 04:42:58 np0005548915 podman[97899]: 2025-12-06 09:42:58.039357216 +0000 UTC m=+0.447631554 container start a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.040006533Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=6.191076ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.044110864Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec  6 04:42:58 np0005548915 podman[97899]: 2025-12-06 09:42:58.045519851 +0000 UTC m=+0.453794279 container attach a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.047411042Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.300088ms
Dec  6 04:42:58 np0005548915 serene_bhabha[97916]: 0 0
Dec  6 04:42:58 np0005548915 systemd[1]: libpod-a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e.scope: Deactivated successfully.
Dec  6 04:42:58 np0005548915 podman[97899]: 2025-12-06 09:42:58.04917962 +0000 UTC m=+0.457453948 container died a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.049462187Z level=info msg="Executing migration" id="drop preferences table v2"
Dec  6 04:42:58 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.049610421Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=149.944µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.051854962Z level=info msg="Executing migration" id="drop preferences table v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.051966995Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=105.702µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.054352168Z level=info msg="Executing migration" id="create preferences table v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.055278353Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=926.655µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.057736749Z level=info msg="Executing migration" id="Update preferences table charset"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.05776355Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=28.281µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.060532364Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.063906004Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.3561ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.067568132Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.06784867Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=277.868µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.070734307Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.074367655Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.634188ms
Dec  6 04:42:58 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1397aec040c9da5e5432c37bf5af7406d5f723d1f473b391beb1d7118b9381ef-merged.mount: Deactivated successfully.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.076178313Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.079358978Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.180825ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.08128909Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.081429094Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=138.294µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.084096586Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.085282498Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.184072ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.088196106Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.089273295Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.078218ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.093559009Z level=info msg="Executing migration" id="create alert table v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.095261935Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.702516ms
Dec  6 04:42:58 np0005548915 podman[97899]: 2025-12-06 09:42:58.095930473 +0000 UTC m=+0.504204811 container remove a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.097861825Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.098989075Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.12588ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.101371349Z level=info msg="Executing migration" id="add index alert state"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.102313135Z level=info msg="Migration successfully executed" id="add index alert state" duration=941.496µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.104735479Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.105750796Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.015107ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.109280591Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.110801442Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.530531ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.114513221Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.115392965Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=880.044µs
Dec  6 04:42:58 np0005548915 systemd[1]: libpod-conmon-a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e.scope: Deactivated successfully.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.11891776Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.119741592Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=824.082µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.121597001Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.129160354Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.561613ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.130732856Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.131330952Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=597.686µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.132976977Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.133643845Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=669.827µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.136940223Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.13720623Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=266.347µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.138850864Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.139399669Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=545.335µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.141196426Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.141859265Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=661.659µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.144013732Z level=info msg="Executing migration" id="Add column is_default"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.148369219Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.353887ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.150764023Z level=info msg="Executing migration" id="Add column frequency"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.15438794Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.626957ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.156469876Z level=info msg="Executing migration" id="Add column send_reminder"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.159384404Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.914058ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.161507312Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec  6 04:42:58 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.223870594Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=62.350871ms
Dec  6 04:42:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.227390928Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.2285796Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.192742ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.231099308Z level=info msg="Executing migration" id="Update alert table charset"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.231129789Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=31.551µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.233176713Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.233200383Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=25.55µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.234810017Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec  6 04:42:58 np0005548915 ceph-mon[74327]: Deploying daemon haproxy.rgw.default.compute-0.vhqyer on compute-0
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.235765952Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=955.455µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.23938684Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.240450058Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.064758ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.243591512Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.245025711Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.436679ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.247465866Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.248270568Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=803.782µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.251664269Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.252708147Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.042898ms
Dec  6 04:42:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.257116385Z level=info msg="Executing migration" id="Add for to alert table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.261520653Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.397887ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.263903057Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec  6 04:42:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.267108693Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.203195ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.268988613Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.269163868Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=175.405µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.27109062Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.27185155Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=762.93µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.273868514Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.274687826Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=818.842µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.276220618Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.279321291Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.100523ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.282408783Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.282622129Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=219.556µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.284745646Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.285881936Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.13634ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.287649034Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.288439135Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=793.621µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.291004944Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.291085496Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=80.992µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.293592173Z level=info msg="Executing migration" id="create annotation table v5"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.294406465Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=814.162µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.296747078Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.297426556Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=678.918µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.300090147Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.300760955Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=670.978µs
Dec  6 04:42:58 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:58 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.303669893Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.304807664Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.137391ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.307671061Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.308451871Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=780.79µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.31061592Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.31137597Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=756.29µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.313544178Z level=info msg="Executing migration" id="Update annotation table charset"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.313570509Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.751µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.31507853Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.318291116Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.211286ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.320440493Z level=info msg="Executing migration" id="Drop category_id index"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.321235485Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=789.791µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.323113284Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.325963491Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.846307ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.327472942Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.328043777Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=570.225µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.32964684Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.330472782Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=827.682µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.332429155Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.333170594Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=741.479µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.334971123Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.344007745Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.031242ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.345811624Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.346498652Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=662.167µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.348071274Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.348793634Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=722.08µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.351755163Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.35202095Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=265.997µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.353972712Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.354653251Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=680.419µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.35649734Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.356668424Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=171.215µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.359388857Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.362630835Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.243518ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.364648068Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.367569196Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.923128ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.369605781Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.370377052Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=771.231µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.372350435Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.373021943Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=716.569µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.375477749Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.375687844Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=216.075µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.377449672Z level=info msg="Executing migration" id="Add epoch_end column"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.380569195Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.117843ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.382340863Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.383129284Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=787.861µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.385322763Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.385506128Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=183.715µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.388283582Z level=info msg="Executing migration" id="Move region to single row"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.388603441Z level=info msg="Migration successfully executed" id="Move region to single row" duration=320.529µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.390585033Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.391380065Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=794.032µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.393125152Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.393920594Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=798.963µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.399714849Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.400588212Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=873.613µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.40238227Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.403077909Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=695.579µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.40572268Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.406526431Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=803.031µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.408836293Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.409707686Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=871.133µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.411350091Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.411400182Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=48.451µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.413879719Z level=info msg="Executing migration" id="create test_data table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.414608168Z level=info msg="Migration successfully executed" id="create test_data table" duration=727.809µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.416668863Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.41728757Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=618.767µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.420227879Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.420896676Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=668.767µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.423385763Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.424096542Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=710.229µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.427290089Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.427507854Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=219.896µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.430526855Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.430914415Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=389.73µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.434548692Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.434606834Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=58.342µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.436342411Z level=info msg="Executing migration" id="create team table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.437111691Z level=info msg="Migration successfully executed" id="create team table" duration=768.87µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.439323691Z level=info msg="Executing migration" id="add index team.org_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.440192584Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=865.323µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.442984389Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.443984266Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=999.897µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.446290817Z level=info msg="Executing migration" id="Add column uid in team"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.449564596Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.273839ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.451589739Z level=info msg="Executing migration" id="Update uid column values in team"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.451730153Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=139.594µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.454327303Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.455031762Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=707.059µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.457140118Z level=info msg="Executing migration" id="create team member table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.457774775Z level=info msg="Migration successfully executed" id="create team member table" duration=632.077µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.460027426Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.460722585Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=694.139µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.463648853Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.470758994Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=7.10911ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.473947459Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.475229944Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.286685ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.477810253Z level=info msg="Executing migration" id="Add column email to team table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:58.480Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001687872s
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.48220578Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.394607ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.484213244Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.487597445Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.383781ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.490583485Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.493931665Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.34754ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.495576149Z level=info msg="Executing migration" id="create dashboard acl table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.49638166Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=805.211µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.499308609Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.500049229Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=740.1µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.502300889Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.503336577Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.035448ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.505391912Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.506632086Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.238074ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.510658753Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.511430395Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=771.022µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.513628813Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.514433095Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=804.522µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.516572462Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.517286201Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=713.249µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.519193232Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.519931572Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=737.86µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.521567086Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.521996357Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=429.191µs
Dec  6 04:42:58 np0005548915 systemd[1]: Reloading.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.526401166Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.526629442Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=228.126µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.528464981Z level=info msg="Executing migration" id="create tag table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.529110578Z level=info msg="Migration successfully executed" id="create tag table" duration=645.537µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.531478862Z level=info msg="Executing migration" id="add index tag.key_value"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.532183461Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=706.709µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.534449142Z level=info msg="Executing migration" id="create login attempt table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.535092529Z level=info msg="Migration successfully executed" id="create login attempt table" duration=640.557µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.537068012Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.53777184Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=703.038µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.540435542Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.541160101Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=723.999µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.543057972Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.555370962Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.31172ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.55753007Z level=info msg="Executing migration" id="create login_attempt v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.558227619Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=696.959µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.560188242Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.561020714Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=829.842µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.563462409Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.563843829Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=381.47µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.565562376Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.566279445Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=717.069µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.56910884Z level=info msg="Executing migration" id="create user auth table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.569935073Z level=info msg="Migration successfully executed" id="create user auth table" duration=825.683µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.572621304Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.573662613Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.040899ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.576072928Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.576137169Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=64.041µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.580032564Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.587679319Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.646726ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.592779536Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.597307367Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.534921ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.599430314Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.603174424Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.74346ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.604930851Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.608719292Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.785531ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.61083981Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.611763314Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=923.764µs
Dec  6 04:42:58 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.614193209Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec  6 04:42:58 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.617875009Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.6783ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.619545183Z level=info msg="Executing migration" id="create server_lock table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.620298853Z level=info msg="Migration successfully executed" id="create server_lock table" duration=753.49µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.622543083Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.623377846Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=836.973µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.625361759Z level=info msg="Executing migration" id="create user auth token table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.626081419Z level=info msg="Migration successfully executed" id="create user auth token table" duration=719.75µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.628454292Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.629960432Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.50491ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.632166292Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.632915962Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=752.12µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.635245194Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.636239431Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=995.537µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.638644055Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.643524246Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.880621ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.645825458Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.646744203Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=918.384µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.649058255Z level=info msg="Executing migration" id="create cache_data table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.649772394Z level=info msg="Migration successfully executed" id="create cache_data table" duration=713.959µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.6518856Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.652596919Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=711.339µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.654665095Z level=info msg="Executing migration" id="create short_url table v1"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.655376514Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=711.589µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.658055896Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.658839186Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=783.48µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.661129858Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.661173859Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=44.501µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.662948717Z level=info msg="Executing migration" id="delete alert_definition table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.66303542Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=89.333µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.665007802Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.665706221Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=695.129µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.669130743Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.670189851Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.058768ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.673209222Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.674038435Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=828.693µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.675982546Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.676051378Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=69.482µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.678154325Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.678939425Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=785.221µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.680827076Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.681594907Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=767.291µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.68393757Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.68471445Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=776.55µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.6869483Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.687839565Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=890.885µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.689674253Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.695878249Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.197996ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.698219412Z level=info msg="Executing migration" id="drop alert_definition table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.699652971Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.433359ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.702191259Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.702260921Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=70.172µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.703810912Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.704561813Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=748.091µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.706805613Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.707607535Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=801.983µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.709657389Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.710435871Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=778.322µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.712210038Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.712259049Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=51.942µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.713904823Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.714832308Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=924.755µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.716620866Z level=info msg="Executing migration" id="create alert_instance table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.717433098Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=811.212µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.719117423Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.719936315Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=818.562µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.72198566Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.723159231Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.171151ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.72610445Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.730864798Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.759988ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.732655596Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.733576611Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=920.125µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.735982385Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.737000032Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.019377ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.738977736Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.762441064Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.415607ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.764947802Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.787464055Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.469212ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.789944072Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.791109073Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.164661ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.793091436Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.794034392Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=941.946µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.796807707Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.801562414Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.754326ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.803539507Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.807534873Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.995106ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.809909787Z level=info msg="Executing migration" id="create alert_rule table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.810691478Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=781.131µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.813537434Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.814597873Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.060159ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.817430209Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.818400895Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=970.536µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.821554989Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.822470485Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=914.906µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.824839298Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.824924731Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=85.243µs
Dec  6 04:42:58 np0005548915 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.vhqyer for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.826841871Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.831167888Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.326507ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.833019507Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.838129994Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.107107ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.84060066Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.845624135Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.017405ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.848919683Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.84991243Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=993.897µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.85178519Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.852724345Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=939.935µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.854472532Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.858814969Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.342077ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.861575333Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.868038046Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.454793ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.870631986Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.871876819Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.244953ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.8745324Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.880255913Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.722643ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.882405281Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.886859641Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.45427ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.888707Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.888793252Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=86.782µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.891381082Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.89239671Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.015788ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.895080322Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.896012706Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=932.224µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.898119283Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.899082129Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=960.125µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.9013789Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.901423642Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=45.051µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.903216139Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.907859204Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.643275ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.910116964Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.914741389Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.622335ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.916594528Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.9226625Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.063702ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.924645584Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.929041762Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.396798ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.93084082Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.935866045Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.021265ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.938413823Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.938471165Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=57.882µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.940459848Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.941151667Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=694.249µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.943274674Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.94801103Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.733506ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.950385304Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.950459896Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=78.272µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.952266025Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.957912846Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.645101ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.959764356Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.96068465Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=920.054µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.963565668Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.969054035Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.487047ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.970969196Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.971682325Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=712.609µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.974334667Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.975581739Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.249363ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.977865621Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.984161309Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.293068ms
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.986552404Z level=info msg="Executing migration" id="create provenance_type table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.987386037Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=833.243µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.989861602Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.990783618Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=921.156µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.993148381Z level=info msg="Executing migration" id="create alert_image table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.993964173Z level=info msg="Migration successfully executed" id="create alert_image table" duration=817.792µs
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.996634874Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec  6 04:42:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.997361164Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=726.2µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.999796299Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.99984309Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=47.451µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.001797123Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.002619525Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=822.342µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.005070531Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.006310884Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.240632ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.008324408Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.008744559Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.010608009Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.011066371Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=456.062µs
Dec  6 04:42:59 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.012629694Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.013534468Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=903.944µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.015371737Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.020904385Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.532148ms
Dec  6 04:42:59 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.022720754Z level=info msg="Executing migration" id="create library_element table v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.023865935Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.144961ms
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.026805084Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.027916663Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.111609ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.030295077Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.03117137Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=875.893µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.033744419Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.035274421Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.529442ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.037921482Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.03897535Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.051608ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.041064926Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.041166929Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=104.293µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.042927645Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.043035198Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=105.223µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.044907759Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.045267778Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=360.329µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.047078297Z level=info msg="Executing migration" id="create data_keys table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.048179836Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.101779ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.051279739Z level=info msg="Executing migration" id="create secrets table"
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.051998799Z level=info msg="Migration successfully executed" id="create secrets table" duration=721.27µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.054161356Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.082802405Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.633809ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.085137187Z level=info msg="Executing migration" id="add name column into data_keys"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.09194744Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.808203ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.094253942Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.094473968Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=220.186µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.096277706Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec  6 04:42:59 np0005548915 podman[98065]: 2025-12-06 09:42:59.114645789 +0000 UTC m=+0.052768216 container create 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer)
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.128468149Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.182723ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.130246167Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.16204448Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.792783ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.163969362Z level=info msg="Executing migration" id="create kv_store table v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.164861775Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=891.623µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.167128196Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.168065601Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=937.135µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.170130557Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.170367183Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=237.576µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.171938734Z level=info msg="Executing migration" id="create permission table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.173058935Z level=info msg="Migration successfully executed" id="create permission table" duration=1.119781ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:59 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:42:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b58fcfff812beea46e10342d748115dd64bff4593d725f7ba67cb37c86b189/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.176207949Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.177113273Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=907.514µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.179814046Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.180920006Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.11014ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.183255198Z level=info msg="Executing migration" id="create role table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.184169353Z level=info msg="Migration successfully executed" id="create role table" duration=914.685µs
Dec  6 04:42:59 np0005548915 podman[98065]: 2025-12-06 09:42:59.092554026 +0000 UTC m=+0.030676433 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.186608169Z level=info msg="Executing migration" id="add column display_name"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.192952138Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.340619ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.195100906Z level=info msg="Executing migration" id="add column group_name"
Dec  6 04:42:59 np0005548915 podman[98065]: 2025-12-06 09:42:59.200274905 +0000 UTC m=+0.138397392 container init 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer)
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.200302306Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.19997ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.20233508Z level=info msg="Executing migration" id="add index role.org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.203381678Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.046778ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.206170443Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.207275372Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.105109ms
Dec  6 04:42:59 np0005548915 podman[98065]: 2025-12-06 09:42:59.209349958 +0000 UTC m=+0.147472375 container start 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer)
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.210616522Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.211532497Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=913.385µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.214067624Z level=info msg="Executing migration" id="create team role table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.214795594Z level=info msg="Migration successfully executed" id="create team role table" duration=727.87µs
Dec  6 04:42:59 np0005548915 bash[98065]: 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.217017104Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.218050341Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.033107ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.220325233Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.221230147Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=904.114µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.223684182Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.224530445Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=845.893µs
Dec  6 04:42:59 np0005548915 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.vhqyer for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.22733554Z level=info msg="Executing migration" id="create user role table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer[98080]: [NOTICE] 339/094259 (2) : New worker #1 (4) forked
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.228220855Z level=info msg="Migration successfully executed" id="create user role table" duration=885.435µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.23071082Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.232470998Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.762458ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.235705725Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.237336789Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.630575ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.240290188Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.242060785Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.772097ms
Dec  6 04:42:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.244817599Z level=info msg="Executing migration" id="create builtin role table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.245876348Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.060639ms
Dec  6 04:42:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.004000106s ======
Dec  6 04:42:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:42:59.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000106s
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.248666112Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.249623568Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=957.806µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.251629382Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.252456254Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=826.712µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.255134586Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.261094355Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.960049ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.262802711Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.263664634Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=859.473µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.266029748Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.266913721Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=883.403µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.269331456Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.27021817Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=886.854µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.271845524Z level=info msg="Executing migration" id="add unique index role.uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.272661935Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=816.271µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.274997198Z level=info msg="Executing migration" id="create seed assignment table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.275694717Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=696.628µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.277574577Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.27841982Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=845.173µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.286400984Z level=info msg="Executing migration" id="add column hidden to role table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.29260137Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.196636ms
Dec  6 04:42:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.294581873Z level=info msg="Executing migration" id="permission kind migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.30042309Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.840517ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.302147636Z level=info msg="Executing migration" id="permission attribute migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.308084085Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.935049ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.309832253Z level=info msg="Executing migration" id="permission identifier migration"
Dec  6 04:42:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.315806573Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.97375ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.317820846Z level=info msg="Executing migration" id="add permission identifier index"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.31870364Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=882.854µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.322135862Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.323640213Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.503811ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.32577804Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec  6 04:42:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.326690485Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=912.505µs
Dec  6 04:42:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.328635416Z level=info msg="Executing migration" id="create query_history table v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.329436158Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=804.822µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.331579835Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.332453109Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=870.634µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.334478753Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.334614926Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=136.813µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.336181989Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.336255561Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=73.982µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.337771642Z level=info msg="Executing migration" id="teams permissions migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.338239124Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=467.752µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.33995988Z level=info msg="Executing migration" id="dashboard permissions"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.340413763Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=454.713µs
Dec  6 04:42:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.342139298Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.342726504Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=587.156µs
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.mwbfro on compute-2
Dec  6 04:42:59 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.mwbfro on compute-2
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.345526999Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.345747705Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=222.746µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.34740795Z level=info msg="Executing migration" id="alerting notification permissions"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.347940484Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=532.004µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.350048771Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.350937394Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=888.213µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.352913797Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.353811111Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=897.044µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.355673151Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.361330923Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.657282ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.363393459Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.363501072Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=108.203µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.365136085Z level=info msg="Executing migration" id="create correlation table v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.366244515Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.10791ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.368743542Z level=info msg="Executing migration" id="add index correlations.uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.370061977Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.319535ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.372424281Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.373744456Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.319845ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.37614828Z level=info msg="Executing migration" id="add correlation config column"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.385384939Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.235978ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.387303409Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.388307157Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=999.078µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.390013502Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.390957918Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=944.686µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.393002622Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.413646306Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.632344ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.41601305Z level=info msg="Executing migration" id="create correlation v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.417423857Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.402777ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.419536014Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.420777297Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.241263ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.423621843Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.424664922Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.040819ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.427095666Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.427993811Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=897.975µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.430325494Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.430598131Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=273.117µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.432170252Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.433031326Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=861.104µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.434857335Z level=info msg="Executing migration" id="add provisioning column"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.440760433Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.899878ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.442952282Z level=info msg="Executing migration" id="create entity_events table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.443866366Z level=info msg="Migration successfully executed" id="create entity_events table" duration=917.975µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.445632394Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.446542978Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=909.985µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.449147068Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.449686483Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.451643795Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.452014054Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.453787392Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.454913982Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.1267ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.457032679Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.458119269Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.08502ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.460679267Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.461931431Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.252284ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.464188651Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.465540437Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.352016ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.46788469Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.469102943Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.237412ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.471119087Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.472309889Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.190942ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.474084006Z level=info msg="Executing migration" id="Drop public config table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.475368941Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.285555ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.477618772Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.478866375Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.247923ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.480751236Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.481863815Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.112599ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.483610832Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.484769333Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.160151ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.486706485Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.487946598Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.240783ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.490445935Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.513469032Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.993716ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.516343009Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.52528717Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.944861ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.527810067Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.535845152Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.009725ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.537862257Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.538169005Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=307.998µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.539997954Z level=info msg="Executing migration" id="add share column"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.547735232Z level=info msg="Migration successfully executed" id="add share column" duration=7.729058ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.550153546Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.550554357Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=402.041µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.552263193Z level=info msg="Executing migration" id="create file table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.553436355Z level=info msg="Migration successfully executed" id="create file table" duration=1.170872ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.557077572Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.558603882Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.52891ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.561248453Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.562649651Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.403218ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.565140508Z level=info msg="Executing migration" id="create file_meta table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.566189326Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.049368ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.568526629Z level=info msg="Executing migration" id="file table idx: path key"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.569945216Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.419017ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.572062573Z level=info msg="Executing migration" id="set path collation in file table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.572205167Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=144.544µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.574360425Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.57452655Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=167.385µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.576348719Z level=info msg="Executing migration" id="managed permissions migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.577054748Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=703.008µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.57937809Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.579828172Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=455.382µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.581796025Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.583322905Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.52723ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.585511454Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.593625242Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.107358ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.595952965Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.596300344Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=349.68µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.598262877Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.599843818Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.580981ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.602560151Z level=info msg="Executing migration" id="update group index for alert rules"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.603151907Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=592.556µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.605525491Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.605920441Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=468.312µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.608656735Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.609414546Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=760.352µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.611794199Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.620714349Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.902989ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.623404821Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.631750075Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.340053ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.633891071Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.635134605Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.243584ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.636988634Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.720094973Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=83.098199ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.72336375Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.72445706Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.09442ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.726417072Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.727329257Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=911.835µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.730132442Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.753853298Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=23.715386ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.757843306Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.765573462Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.731927ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.767967876Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.768343176Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=375.8µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.770041492Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.770278639Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=237.137µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.771865942Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.772106438Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=240.926µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.77406474Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.774347797Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=283.317µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.776124185Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.776387592Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=260.657µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.778305714Z level=info msg="Executing migration" id="create folder table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.779225488Z level=info msg="Migration successfully executed" id="create folder table" duration=920.174µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.780982816Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.782026213Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.043127ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.784423958Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.785356733Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=933.144µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.787604793Z level=info msg="Executing migration" id="Update folder title length"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.787664425Z level=info msg="Migration successfully executed" id="Update folder title length" duration=60.372µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.789250648Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.790244174Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=996.157µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.792560676Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.793874941Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.315025ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.795752922Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.79684552Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.091608ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.798938777Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.799412549Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=473.902µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.80092198Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.801174677Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=252.927µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.802970226Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.803991273Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.020908ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.805971326Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.806994553Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.023077ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.808517184Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.809404278Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=887.354µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.811185166Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.812167702Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=981.756µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.813908529Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.814895415Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=987.346µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.816686143Z level=info msg="Executing migration" id="create anon_device table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.81769831Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.011967ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.819663783Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.820913716Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.249723ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.823820764Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.824734768Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=913.544µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.827086092Z level=info msg="Executing migration" id="create signing_key table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.828028457Z level=info msg="Migration successfully executed" id="create signing_key table" duration=944.425µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.830886614Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.831777508Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=892.404µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.834134301Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.835173199Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.039007ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.836886945Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.837133411Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=247.206µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.83967395Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.846227305Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.552475ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.848393783Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.849053271Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=659.688µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.851178998Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.852157444Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=975.586µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.854305882Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.855761101Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.417218ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.857564799Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.858627338Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.060989ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.860667922Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.861715751Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.047529ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.863718874Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.86465876Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=940.156µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.866391365Z level=info msg="Executing migration" id="create sso_setting table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.867329701Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=938.616µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.869605761Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.87026493Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=659.589µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.872114219Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.872355845Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=241.966µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.874268247Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.874357859Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=90.252µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.876749274Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.883297029Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.546986ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.885252511Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.892154577Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.902316ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.893892073Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.894254603Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=362.69µs
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.896099262Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.798020745s
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore t=2025-12-06T09:42:59.897254273Z level=info msg="Created default organization"
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=secrets t=2025-12-06T09:42:59.899418391Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=plugin.store t=2025-12-06T09:42:59.917468316Z level=info msg="Loading plugins..."
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=local.finder t=2025-12-06T09:42:59.993614757Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=plugin.store t=2025-12-06T09:42:59.993737981Z level=info msg="Plugins loaded" count=55 duration=76.270195ms
Dec  6 04:42:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=query_data t=2025-12-06T09:42:59.996431233Z level=info msg="Query Service initialization"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=live.push_http t=2025-12-06T09:43:00.00978379Z level=info msg="Live Push Gateway initialization"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration t=2025-12-06T09:43:00.013936102Z level=info msg=Starting
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration t=2025-12-06T09:43:00.014668581Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration orgID=1 t=2025-12-06T09:43:00.015390931Z level=info msg="Migrating alerts for organisation"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration orgID=1 t=2025-12-06T09:43:00.016744487Z level=info msg="Alerts found to migrate" alerts=0
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration t=2025-12-06T09:43:00.019956383Z level=info msg="Completed alerting migration"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.state.manager t=2025-12-06T09:43:00.055274721Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  6 04:43:00 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=infra.usagestats.collector t=2025-12-06T09:43:00.058940349Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.datasources t=2025-12-06T09:43:00.06120423Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec  6 04:43:00 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.alerting t=2025-12-06T09:43:00.07838636Z level=info msg="starting to provision alerting"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.alerting t=2025-12-06T09:43:00.078416071Z level=info msg="finished to provision alerting"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafanaStorageLogger t=2025-12-06T09:43:00.078647107Z level=info msg="Storage starting"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.state.manager t=2025-12-06T09:43:00.079259293Z level=info msg="Warming state cache for startup"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.multiorg.alertmanager t=2025-12-06T09:43:00.081109203Z level=info msg="Starting MultiOrg Alertmanager"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=http.server t=2025-12-06T09:43:00.085730677Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=http.server t=2025-12-06T09:43:00.086271172Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore.transactions t=2025-12-06T09:43:00.091083671Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore.transactions t=2025-12-06T09:43:00.102466036Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.state.manager t=2025-12-06T09:43:00.124714623Z level=info msg="State cache has been initialized" states=0 duration=45.45078ms
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.scheduler t=2025-12-06T09:43:00.124781575Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ticker t=2025-12-06T09:43:00.124864357Z level=info msg=starting first_tick=2025-12-06T09:43:10Z
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.dashboard t=2025-12-06T09:43:00.14328442Z level=info msg="starting to provision dashboards"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=plugins.update.checker t=2025-12-06T09:43:00.202660192Z level=info msg="Update check succeeded" duration=119.973726ms
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana.update.checker t=2025-12-06T09:43:00.206254509Z level=info msg="Update check succeeded" duration=125.855205ms
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore.transactions t=2025-12-06T09:43:00.218804785Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  6 04:43:00 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 26 completed events
Dec  6 04:43:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:43:00 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:00 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:00 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:00 np0005548915 ceph-mon[74327]: Deploying daemon haproxy.rgw.default.compute-2.mwbfro on compute-2
Dec  6 04:43:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana-apiserver t=2025-12-06T09:43:00.353347223Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana-apiserver t=2025-12-06T09:43:00.354184165Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.dashboard t=2025-12-06T09:43:00.372278071Z level=info msg="finished to provision dashboards"
Dec  6 04:43:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 45 op/s; 106 B/s, 5 objects/s recovering
Dec  6 04:43:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  6 04:43:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  6 04:43:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  6 04:43:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  6 04:43:01 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec  6 04:43:01 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec  6 04:43:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:01 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:43:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:01.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:43:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:02 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec  6 04:43:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  6 04:43:02 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  6 04:43:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  6 04:43:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 44 op/s; 104 B/s, 5 objects/s recovering
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  6 04:43:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:03.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:03 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec  6 04:43:03 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec  6 04:43:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:03 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:43:03 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:43:03 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:43:03 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:43:03 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.mycoxk on compute-0
Dec  6 04:43:03 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.mycoxk on compute-0
Dec  6 04:43:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:03 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:03 np0005548915 podman[98194]: 2025-12-06 09:43:03.758241522 +0000 UTC m=+0.027501769 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  6 04:43:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:04 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec  6 04:43:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v89: 337 pgs: 4 unknown, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 0 objects/s recovering
Dec  6 04:43:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:05.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:05 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:05.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.e( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332633018s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 50'39 active pruub 216.538146973s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.e( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332432747s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 216.538146973s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.6( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332157135s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 50'39 active pruub 216.538192749s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.6( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332110405s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 216.538192749s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec  6 04:43:05 np0005548915 podman[98194]: 2025-12-06 09:43:05.894884632 +0000 UTC m=+2.164144849 container create 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=keepalived for Ceph)
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: Deploying daemon keepalived.rgw.default.compute-0.mycoxk on compute-0
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  6 04:43:05 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  6 04:43:05 np0005548915 systemd[1]: Started libpod-conmon-97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae.scope.
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:05 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:05 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:05 np0005548915 podman[98194]: 2025-12-06 09:43:05.999092946 +0000 UTC m=+2.268353193 container init 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc.)
Dec  6 04:43:06 np0005548915 podman[98194]: 2025-12-06 09:43:06.01373983 +0000 UTC m=+2.283000057 container start 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, vcs-type=git, name=keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, release=1793)
Dec  6 04:43:06 np0005548915 podman[98194]: 2025-12-06 09:43:06.017639074 +0000 UTC m=+2.286899331 container attach 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=)
Dec  6 04:43:06 np0005548915 wizardly_brahmagupta[98213]: 0 0
Dec  6 04:43:06 np0005548915 systemd[1]: libpod-97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae.scope: Deactivated successfully.
Dec  6 04:43:06 np0005548915 podman[98194]: 2025-12-06 09:43:06.020199863 +0000 UTC m=+2.289460090 container died 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., release=1793, io.buildah.version=1.28.2, distribution-scope=public, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  6 04:43:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f93fe5beea4581dd2195d9d8fb382a78eb78182bf2aaf7926f27d05b24685476-merged.mount: Deactivated successfully.
Dec  6 04:43:06 np0005548915 podman[98194]: 2025-12-06 09:43:06.070637275 +0000 UTC m=+2.339897502 container remove 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container)
Dec  6 04:43:06 np0005548915 systemd[1]: libpod-conmon-97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae.scope: Deactivated successfully.
Dec  6 04:43:06 np0005548915 systemd[1]: Reloading.
Dec  6 04:43:06 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:43:06 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:43:06 np0005548915 systemd[1]: Reloading.
Dec  6 04:43:06 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:43:06 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:43:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c0030a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:06 np0005548915 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.mycoxk for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:43:06 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  6 04:43:06 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec  6 04:43:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  6 04:43:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  6 04:43:06 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:06 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 2 active+clean+scrubbing, 4 unknown, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:07.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:07 np0005548915 podman[98359]: 2025-12-06 09:43:07.080142983 +0000 UTC m=+0.054269775 container create 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, architecture=x86_64, io.openshift.expose-services=, description=keepalived for Ceph)
Dec  6 04:43:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4084d4ffe5a66ebdaa93523f2bb714525829da8d3d9e0aaaea12ffcc4dfb0c/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:07 np0005548915 podman[98359]: 2025-12-06 09:43:07.059136471 +0000 UTC m=+0.033263233 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  6 04:43:07 np0005548915 podman[98359]: 2025-12-06 09:43:07.159399079 +0000 UTC m=+0.133525911 container init 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk, vcs-type=git, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  6 04:43:07 np0005548915 podman[98359]: 2025-12-06 09:43:07.164805784 +0000 UTC m=+0.138932566 container start 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-type=git, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, release=1793, architecture=x86_64)
Dec  6 04:43:07 np0005548915 bash[98359]: 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c
Dec  6 04:43:07 np0005548915 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.mycoxk for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:07 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Starting VRRP child process, pid=4
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Startup complete
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:43:07 2025: (VI_0) Entering BACKUP STATE
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: (VI_0) Entering BACKUP STATE (init)
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: VRRP_Script(check_backend) succeeded
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:07 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:43:07 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:43:07 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:43:07 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:43:07 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.yurwwh on compute-2
Dec  6 04:43:07 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.yurwwh on compute-2
Dec  6 04:43:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:43:07 2025: (VI_0) Entering MASTER STATE
Dec  6 04:43:07 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec  6 04:43:07 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: Deploying daemon keepalived.rgw.default.compute-2.yurwwh on compute-2
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  6 04:43:07 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  6 04:43:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c0030a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:08 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Dec  6 04:43:08 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Dec  6 04:43:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  6 04:43:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  6 04:43:08 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:09 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v95: 337 pgs: 2 active+clean+scrubbing, 4 unknown, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:09.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:09 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 8b961573-5d9a-4966-9430-80966b578f70 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  6 04:43:09 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 8b961573-5d9a-4966-9430-80966b578f70 (Updating ingress.rgw.default deployment (+4 -> 4)) in 12 seconds
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  6 04:43:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:09 np0005548915 ceph-mgr[74618]: [progress INFO root] update: starting ev 12403888-f638-4724-bb9d-df3242ef47cd (Updating prometheus deployment (+1 -> 1))
Dec  6 04:43:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:09 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92180016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:09.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:09 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec  6 04:43:09 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec  6 04:43:09 np0005548915 systemd-logind[795]: New session 37 of user zuul.
Dec  6 04:43:09 np0005548915 systemd[1]: Started Session 37 of User zuul.
Dec  6 04:43:10 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.d scrub starts
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: Deploying daemon prometheus.compute-0 on compute-0
Dec  6 04:43:10 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.d scrub ok
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  6 04:43:10 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=4 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:10 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=6 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:10 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:10 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:10 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 27 completed events
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:10 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event ab5dd157-2fbc-4f6e-89d0-89ada306a67b (Global Recovery Event) in 20 seconds
Dec  6 04:43:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:10 np0005548915 python3.9[98645]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:43:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:10 2025: (VI_0) Entering MASTER STATE
Dec  6 04:43:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:10 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Dec  6 04:43:11 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Dec  6 04:43:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v97: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 66 op/s; 312 B/s, 16 objects/s recovering
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  6 04:43:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:11.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:11 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:11.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  6 04:43:11 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  6 04:43:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[6.8( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=82) [1] r=0 lpr=82 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:11 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Dec  6 04:43:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92180016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:12 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  6 04:43:12 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  6 04:43:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  6 04:43:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:12 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[6.8( v 50'39 (0'0,50'39] local-lis/les=82/83 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=82) [1] r=0 lpr=82 pi=[54,82)/1 crt=50'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:12 np0005548915 python3.9[98977]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:43:12 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Dec  6 04:43:13 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Dec  6 04:43:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 66 op/s; 315 B/s, 16 objects/s recovering
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  6 04:43:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:13.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:13 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.261314882 +0000 UTC m=+3.296148292 volume create d2cfd66e88c0603ca7839a87101328dec9ac72785f536bb929e344840b6b9a1d
Dec  6 04:43:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:13.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.270331532 +0000 UTC m=+3.305164942 container create 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.247249457 +0000 UTC m=+3.282082887 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  6 04:43:13 np0005548915 systemd[1]: Started libpod-conmon-02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd.scope.
Dec  6 04:43:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be2ea4f9e55598e22892264736e544d463b2f026eba43a357ac8056878ac7f7/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.412077042 +0000 UTC m=+3.446910472 container init 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.425825419 +0000 UTC m=+3.460658839 container start 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 pensive_taussig[99118]: 65534 65534
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.43075447 +0000 UTC m=+3.465587880 container attach 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 systemd[1]: libpod-02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd.scope: Deactivated successfully.
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.433221841 +0000 UTC m=+3.468055271 container died 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8be2ea4f9e55598e22892264736e544d463b2f026eba43a357ac8056878ac7f7-merged.mount: Deactivated successfully.
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.488213608 +0000 UTC m=+3.523047018 container remove 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 podman[98534]: 2025-12-06 09:43:13.492363087 +0000 UTC m=+3.527196497 volume remove d2cfd66e88c0603ca7839a87101328dec9ac72785f536bb929e344840b6b9a1d
Dec  6 04:43:13 np0005548915 systemd[1]: libpod-conmon-02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd.scope: Deactivated successfully.
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.568712871 +0000 UTC m=+0.041667313 volume create 092d808e2d3d866ade41ca0fff0584cbdcc946bbcbc37d6e9af37621503982e7
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.583397684 +0000 UTC m=+0.056352106 container create a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 systemd[1]: Started libpod-conmon-a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369.scope.
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.553634265 +0000 UTC m=+0.026588707 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  6 04:43:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a48a945713c1757c1192bfe2eaa73ff3f7feaf7af35dce3ff8af657c3ec64f/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.668400386 +0000 UTC m=+0.141354848 container init a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.677465007 +0000 UTC m=+0.150419439 container start a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 dreamy_payne[99150]: 65534 65534
Dec  6 04:43:13 np0005548915 systemd[1]: libpod-a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369.scope: Deactivated successfully.
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.681632798 +0000 UTC m=+0.154587280 container attach a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.682158803 +0000 UTC m=+0.155113235 container died a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f3a48a945713c1757c1192bfe2eaa73ff3f7feaf7af35dce3ff8af657c3ec64f-merged.mount: Deactivated successfully.
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.727068749 +0000 UTC m=+0.200023181 container remove a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:13 np0005548915 podman[99134]: 2025-12-06 09:43:13.733424512 +0000 UTC m=+0.206378944 volume remove 092d808e2d3d866ade41ca0fff0584cbdcc946bbcbc37d6e9af37621503982e7
Dec  6 04:43:13 np0005548915 systemd[1]: libpod-conmon-a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369.scope: Deactivated successfully.
Dec  6 04:43:13 np0005548915 systemd[1]: Reloading.
Dec  6 04:43:13 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:43:13 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1b deep-scrub starts
Dec  6 04:43:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.920730591s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 227.311737061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.919921875s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.311737061s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.915806770s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 227.307769775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.915764809s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.307769775s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1b deep-scrub ok
Dec  6 04:43:14 np0005548915 systemd[1]: Reloading.
Dec  6 04:43:14 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:43:14 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:14 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.510307) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194510442, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7741, "num_deletes": 251, "total_data_size": 15010882, "memory_usage": 15756392, "flush_reason": "Manual Compaction"}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec  6 04:43:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:14 np0005548915 systemd[1]: Starting Ceph prometheus.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194673234, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12985715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7878, "table_properties": {"data_size": 12957477, "index_size": 18011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 87831, "raw_average_key_size": 24, "raw_value_size": 12887823, "raw_average_value_size": 3544, "num_data_blocks": 798, "num_entries": 3636, "num_filter_entries": 3636, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013863, "oldest_key_time": 1765013863, "file_creation_time": 1765014194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 162970 microseconds, and 23958 cpu microseconds.
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.673291) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12985715 bytes OK
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.673313) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.678578) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.678629) EVENT_LOG_v1 {"time_micros": 1765014194678618, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.678664) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14975896, prev total WAL file size 14975896, number of live WAL files 2.
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.683083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(12MB) 13(57KB) 8(1944B)]
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194683304, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 13046145, "oldest_snapshot_seqno": -1}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3453 keys, 12999783 bytes, temperature: kUnknown
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194881597, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12999783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12972032, "index_size": 18041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 85976, "raw_average_key_size": 24, "raw_value_size": 12903994, "raw_average_value_size": 3737, "num_data_blocks": 801, "num_entries": 3453, "num_filter_entries": 3453, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.882025) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12999783 bytes
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.888520) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.8 rd, 65.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(12.4, 0.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3745, records dropped: 292 output_compression: NoCompression
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.888587) EVENT_LOG_v1 {"time_micros": 1765014194888558, "job": 4, "event": "compaction_finished", "compaction_time_micros": 198418, "compaction_time_cpu_micros": 38953, "output_level": 6, "num_output_files": 1, "total_output_size": 12999783, "num_input_records": 3745, "num_output_records": 3453, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194892970, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194893140, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194893257, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  6 04:43:14 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.682852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:43:14 np0005548915 podman[99297]: 2025-12-06 09:43:14.918424368 +0000 UTC m=+0.049865189 container create cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:14 np0005548915 podman[99297]: 2025-12-06 09:43:14.895149027 +0000 UTC m=+0.026589898 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  6 04:43:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dad53dfb2f070967372d35b973ddc922d0cc08f86e02ac04dcdb3044413b5e/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dad53dfb2f070967372d35b973ddc922d0cc08f86e02ac04dcdb3044413b5e/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:15 np0005548915 podman[99297]: 2025-12-06 09:43:15.009722402 +0000 UTC m=+0.141163263 container init cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:15 np0005548915 podman[99297]: 2025-12-06 09:43:15.016362133 +0000 UTC m=+0.147802954 container start cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  6 04:43:15 np0005548915 bash[99297]: cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Dec  6 04:43:15 np0005548915 systemd[1]: Started Ceph prometheus.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:623 level=info host_details="(Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 x86_64 compute-0 (none))"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.068Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.068Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec  6 04:43:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:15.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.53µs
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=35.021µs wal_replay_duration=593.417µs wbl_replay_duration=160ns total_replay_duration=656.729µs
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.073Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.073Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.077Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.077Z caller=main.go:1153 level=info msg="TSDB started"
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.077Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.110Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=33.444545ms db_storage=1.84µs remote_storage=3.76µs web_handler=1.15µs query_engine=1.96µs scrape=2.841362ms scrape_sd=478.114µs notify=49.852µs notify_sd=267.367µs rules=28.715908ms tracing=18.2µs
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.110Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.110Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-mgr[74618]: [progress INFO root] complete: finished ev 12403888-f638-4724-bb9d-df3242ef47cd (Updating prometheus deployment (+1 -> 1))
Dec  6 04:43:15 np0005548915 ceph-mgr[74618]: [progress INFO root] Completed event 12403888-f638-4724-bb9d-df3242ef47cd (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  6 04:43:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:15 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92180016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:15.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:15 np0005548915 ceph-mgr[74618]: [progress INFO root] Writing back 29 completed events
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:15 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  6 04:43:15 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=6 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.525048256s) [0] async=[0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 233.847198486s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=6 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.524926186s) [0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.847198486s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.522101402s) [0] async=[0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 233.843963623s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.521376610s) [0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.843963623s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.007025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196007205, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 334, "num_deletes": 253, "total_data_size": 165864, "memory_usage": 173816, "flush_reason": "Manual Compaction"}
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196011262, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 165969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7879, "largest_seqno": 8212, "table_properties": {"data_size": 163775, "index_size": 358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4699, "raw_average_key_size": 15, "raw_value_size": 159314, "raw_average_value_size": 525, "num_data_blocks": 16, "num_entries": 303, "num_filter_entries": 303, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014195, "oldest_key_time": 1765014195, "file_creation_time": 1765014196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 4251 microseconds, and 1396 cpu microseconds.
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.011296) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 165969 bytes OK
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.011312) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.012959) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.012974) EVENT_LOG_v1 {"time_micros": 1765014196012970, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.013018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 163525, prev total WAL file size 163525, number of live WAL files 2.
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.013537) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(162KB)], [20(12MB)]
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196013603, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13165752, "oldest_snapshot_seqno": -1}
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Dec  6 04:43:16 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Dec  6 04:43:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3233 keys, 12743077 bytes, temperature: kUnknown
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196148111, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12743077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12716654, "index_size": 17225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 83256, "raw_average_key_size": 25, "raw_value_size": 12652293, "raw_average_value_size": 3913, "num_data_blocks": 748, "num_entries": 3233, "num_filter_entries": 3233, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.150395) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12743077 bytes
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.153172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.8 rd, 94.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(156.1) write-amplify(76.8) OK, records in: 3756, records dropped: 523 output_compression: NoCompression
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.153201) EVENT_LOG_v1 {"time_micros": 1765014196153188, "job": 6, "event": "compaction_finished", "compaction_time_micros": 134608, "compaction_time_cpu_micros": 45149, "output_level": 6, "num_output_files": 1, "total_output_size": 12743077, "num_input_records": 3756, "num_output_records": 3233, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196153380, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196155610, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.013399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr respawn  exe_path /proc/self/exe
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.qhdjwa(active, since 107s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:43:16 np0005548915 systemd[1]: session-35.scope: Deactivated successfully.
Dec  6 04:43:16 np0005548915 systemd[1]: session-35.scope: Consumed 55.388s CPU time.
Dec  6 04:43:16 np0005548915 systemd-logind[795]: Session 35 logged out. Waiting for processes to exit.
Dec  6 04:43:16 np0005548915 systemd-logind[795]: Removed session 35.
Dec  6 04:43:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec  6 04:43:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec  6 04:43:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:16.427+0000 7f364ff49140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec  6 04:43:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:16.507+0000 7f364ff49140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 04:43:16 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  6 04:43:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  6 04:43:17 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.f scrub starts
Dec  6 04:43:17 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.f scrub ok
Dec  6 04:43:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:17 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92100016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:17.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  6 04:43:17 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  6 04:43:17 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec  6 04:43:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:17.422+0000 7f364ff49140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:43:17 np0005548915 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 04:43:17 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec  6 04:43:18 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec  6 04:43:18 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.105+0000 7f364ff49140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:  from numpy import show_config as show_numpy_config
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.288+0000 7f364ff49140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec  6 04:43:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  6 04:43:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  6 04:43:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  6 04:43:18 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:18 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:18 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:18 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.367+0000 7f364ff49140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.540+0000 7f364ff49140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 04:43:18 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec  6 04:43:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec  6 04:43:19 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec  6 04:43:19 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec  6 04:43:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:19.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec  6 04:43:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:19 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:19.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec  6 04:43:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:19.674+0000 7f364ff49140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec  6 04:43:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:19.903+0000 7f364ff49140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec  6 04:43:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:19.984+0000 7f364ff49140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 04:43:19 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec  6 04:43:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.054+0000 7f364ff49140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec  6 04:43:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92100016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:20 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec  6 04:43:20 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec  6 04:43:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.139+0000 7f364ff49140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec  6 04:43:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.227+0000 7f364ff49140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec  6 04:43:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  6 04:43:20 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  6 04:43:20 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 90 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=6 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:20 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 90 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=5 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:20 np0005548915 systemd[1]: session-37.scope: Deactivated successfully.
Dec  6 04:43:20 np0005548915 systemd[1]: session-37.scope: Consumed 8.697s CPU time.
Dec  6 04:43:20 np0005548915 systemd-logind[795]: Session 37 logged out. Waiting for processes to exit.
Dec  6 04:43:20 np0005548915 systemd-logind[795]: Removed session 37.
Dec  6 04:43:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.617+0000 7f364ff49140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec  6 04:43:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.718+0000 7f364ff49140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec  6 04:43:20 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec  6 04:43:21 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec  6 04:43:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:21.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:21 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec  6 04:43:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.179+0000 7f364ff49140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec  6 04:43:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:21 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.807+0000 7f364ff49140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec  6 04:43:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.889+0000 7f364ff49140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec  6 04:43:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.992+0000 7f364ff49140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 04:43:21 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec  6 04:43:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec  6 04:43:22 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec  6 04:43:22 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec  6 04:43:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.158+0000 7f364ff49140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec  6 04:43:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.238+0000 7f364ff49140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec  6 04:43:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.416+0000 7f364ff49140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec  6 04:43:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec  6 04:43:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec  6 04:43:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92100016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.654+0000 7f364ff49140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec  6 04:43:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.945+0000 7f364ff49140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 04:43:22 np0005548915 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec  6 04:43:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:23.018+0000 7f364ff49140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x555807345860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  6 04:43:23 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.5 deep-scrub starts
Dec  6 04:43:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:23.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:23 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.5 deep-scrub ok
Dec  6 04:43:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:23 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.qhdjwa(active, starting, since 0.658318s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 0
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 0
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 0
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 1
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Starting
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:43:23
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: dashboard
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [dashboard INFO sso] Loading SSO DB version=1
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [progress INFO root] Loading...
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f35cf29ad00>, <progress.module.GhostEvent object at 0x7f35cf29af40>, <progress.module.GhostEvent object at 0x7f35cf29af70>, <progress.module.GhostEvent object at 0x7f35cf29afa0>, <progress.module.GhostEvent object at 0x7f35cf29afd0>, <progress.module.GhostEvent object at 0x7f35cf2a8040>, <progress.module.GhostEvent object at 0x7f35cf2a8070>, <progress.module.GhostEvent object at 0x7f35cf2a80a0>, <progress.module.GhostEvent object at 0x7f35cf2a80d0>, <progress.module.GhostEvent object at 0x7f35cf2a8100>, <progress.module.GhostEvent object at 0x7f35cf2a8130>, <progress.module.GhostEvent object at 0x7f35cf2a8160>, <progress.module.GhostEvent object at 0x7f35cf2a8190>, <progress.module.GhostEvent object at 0x7f35cf2a81c0>, <progress.module.GhostEvent object at 0x7f35cf2a81f0>, <progress.module.GhostEvent object at 0x7f35cf2a8220>, <progress.module.GhostEvent object at 0x7f35cf2a8250>, <progress.module.GhostEvent object at 0x7f35cf2a8280>, <progress.module.GhostEvent object at 0x7f35cf2a82b0>, <progress.module.GhostEvent object at 0x7f35cf2a82e0>, <progress.module.GhostEvent object at 0x7f35cf2a8310>, <progress.module.GhostEvent object at 0x7f35cf2a8340>, <progress.module.GhostEvent object at 0x7f35cf2a8370>, <progress.module.GhostEvent object at 0x7f35cf2a83a0>, <progress.module.GhostEvent object at 0x7f35cf2a83d0>, <progress.module.GhostEvent object at 0x7f35cf2a8400>, <progress.module.GhostEvent object at 0x7f35cf2a8430>, <progress.module.GhostEvent object at 0x7f35cf2a8460>, <progress.module.GhostEvent object at 0x7f35cf2a8490>] historic events
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: prometheus
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [prometheus INFO root] Cache enabled
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [prometheus INFO root] starting metric collection thread
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [prometheus INFO root] Starting engine...
Dec  6 04:43:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:43:23] ENGINE Bus STARTING
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:43:23] ENGINE Bus STARTING
Dec  6 04:43:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: CherryPy Checker:
Dec  6 04:43:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: The Application mounted at '' has an empty config.
Dec  6 04:43:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec  6 04:43:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  6 04:43:23 np0005548915 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.002+0000 7f35bc622640 -1 client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:43:24] ENGINE Serving on http://:::9283
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:43:24] ENGINE Bus STARTED
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:43:24] ENGINE Serving on http://:::9283
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:43:24] ENGINE Bus STARTED
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [prometheus INFO root] Engine started.
Dec  6 04:43:24 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  6 04:43:24 np0005548915 systemd-logind[795]: New session 38 of user ceph-admin.
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  6 04:43:24 np0005548915 systemd[1]: Started Session 38 of User ceph-admin.
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  6 04:43:24 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec  6 04:43:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec  6 04:43:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec  6 04:43:24 np0005548915 ceph-mgr[74618]: [dashboard INFO dashboard.module] Engine started.
Dec  6 04:43:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:25 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec  6 04:43:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:25.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:25 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec  6 04:43:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:25 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Bus STARTING
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Bus STARTING
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Client ('192.168.122.100', 44988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Client ('192.168.122.100', 44988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Bus STARTED
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Bus STARTED
Dec  6 04:43:25 np0005548915 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  6 04:43:26 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec  6 04:43:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003dd0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:27.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:27 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec  6 04:43:27 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:43:27 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:43:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec  6 04:43:27 np0005548915 podman[99710]: 2025-12-06 09:43:27.675511237 +0000 UTC m=+2.710197087 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  6 04:43:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  6 04:43:27 np0005548915 podman[99710]: 2025-12-06 09:43:27.77858439 +0000 UTC m=+2.813270210 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  6 04:43:27 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 04:43:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:28 np0005548915 podman[99868]: 2025-12-06 09:43:28.275924468 +0000 UTC m=+0.079167865 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:28 np0005548915 podman[99868]: 2025-12-06 09:43:28.313891443 +0000 UTC m=+0.117134850 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:28 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec  6 04:43:28 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.qhdjwa(active, since 5s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Bus STARTING
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Serving on https://192.168.122.100:7150
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Client ('192.168.122.100', 44988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Serving on http://192.168.122.100:8765
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Bus STARTED
Dec  6 04:43:28 np0005548915 podman[99961]: 2025-12-06 09:43:28.724688435 +0000 UTC m=+0.100449049 container exec f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  6 04:43:28 np0005548915 podman[99961]: 2025-12-06 09:43:28.740106579 +0000 UTC m=+0.115867183 container exec_died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  6 04:43:28 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=67/67 les/c/f=68/68/0 sis=92) [1] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:28 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=92) [1] r=0 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:28 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[6.b( v 50'39 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=11.464945793s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=50'39 mlcod 50'39 active pruub 242.538116455s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:28 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[6.b( v 50'39 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=11.464897156s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 242.538116455s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:43:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  6 04:43:28 np0005548915 podman[100025]: 2025-12-06 09:43:28.965581794 +0000 UTC m=+0.064011068 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:43:28 np0005548915 podman[100025]: 2025-12-06 09:43:28.982872413 +0000 UTC m=+0.081301667 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:43:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:29.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:29 np0005548915 podman[100093]: 2025-12-06 09:43:29.250915606 +0000 UTC m=+0.076826938 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Dec  6 04:43:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:29 np0005548915 podman[100093]: 2025-12-06 09:43:29.297437077 +0000 UTC m=+0.123348349 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793)
Dec  6 04:43:29 np0005548915 podman[100159]: 2025-12-06 09:43:29.563707339 +0000 UTC m=+0.055732619 container exec b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:29 np0005548915 podman[100159]: 2025-12-06 09:43:29.609003396 +0000 UTC m=+0.101028666 container exec_died b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:29 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec  6 04:43:29 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  6 04:43:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v7: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:29 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  6 04:43:29 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:29 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=67/67 les/c/f=68/68/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  6 04:43:29 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=67/67 les/c/f=68/68/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:29 np0005548915 podman[100233]: 2025-12-06 09:43:29.827364245 +0000 UTC m=+0.057811048 container exec cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e34: compute-0.qhdjwa(active, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:43:30 np0005548915 podman[100233]: 2025-12-06 09:43:30.00396946 +0000 UTC m=+0.234416243 container exec_died cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003df0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:43:30 np0005548915 podman[100341]: 2025-12-06 09:43:30.497943661 +0000 UTC m=+0.065721457 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:30 np0005548915 podman[100341]: 2025-12-06 09:43:30.551120645 +0000 UTC m=+0.118898411 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec  6 04:43:30 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:30 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  6 04:43:30 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 94 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:30 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 94 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:30] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Dec  6 04:43:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:30] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Dec  6 04:43:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:31.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:31 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec  6 04:43:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v9: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=95) [1] r=0 lpr=95 pi=[74,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=95) [1] r=0 lpr=95 pi=[74,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=93/67 les/c/f=94/68/0 sis=95) [1] r=0 lpr=95 pi=[67,95)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=93/67 les/c/f=94/68/0 sis=95) [1] r=0 lpr=95 pi=[67,95)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=9 ec=58/45 lis/c=93/68 les/c/f=94/69/0 sis=95) [1] r=0 lpr=95 pi=[68,95)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:31 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=9 ec=58/45 lis/c=93/68 les/c/f=94/69/0 sis=95) [1] r=0 lpr=95 pi=[68,95)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:43:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:43:31 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:43:31 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:43:31 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:43:31 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:43:31 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:43:31 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:43:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:32 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:32 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:32 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:32 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  6 04:43:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003e10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:32 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec  6 04:43:32 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:33.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:33 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:33.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v11: 337 pgs: 2 unknown, 2 remapped+peering, 2 peering, 1 active+clean+scrubbing, 330 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:33 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:34 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:34 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:43:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:43:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:34 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Dec  6 04:43:34 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec  6 04:43:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  6 04:43:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:35.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:35 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003e30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:35.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:35 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  6 04:43:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec  6 04:43:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=95/96 n=9 ec=58/45 lis/c=93/68 les/c/f=94/69/0 sis=95) [1] r=0 lpr=95 pi=[68,95)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:35 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=95/96 n=4 ec=58/45 lis/c=93/67 les/c/f=94/68/0 sis=95) [1] r=0 lpr=95 pi=[67,95)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v13: 337 pgs: 2 unknown, 2 remapped+peering, 2 peering, 1 active+clean+scrubbing, 330 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:43:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:36 np0005548915 systemd-logind[795]: New session 39 of user zuul.
Dec  6 04:43:36 np0005548915 systemd[1]: Started Session 39 of User zuul.
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:43:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:43:37 np0005548915 python3.9[101584]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  6 04:43:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:37.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:37 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:37.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:37 np0005548915 podman[101756]: 2025-12-06 09:43:37.570842967 +0000 UTC m=+0.065305406 container create 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:43:37 np0005548915 systemd[90433]: Starting Mark boot as successful...
Dec  6 04:43:37 np0005548915 systemd[1]: Started libpod-conmon-06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe.scope.
Dec  6 04:43:37 np0005548915 systemd[90433]: Finished Mark boot as successful.
Dec  6 04:43:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:37 np0005548915 podman[101756]: 2025-12-06 09:43:37.542252612 +0000 UTC m=+0.036715051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:37 np0005548915 podman[101756]: 2025-12-06 09:43:37.642632358 +0000 UTC m=+0.137094777 container init 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:43:37 np0005548915 podman[101756]: 2025-12-06 09:43:37.650954188 +0000 UTC m=+0.145416587 container start 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:43:37 np0005548915 clever_kepler[101774]: 167 167
Dec  6 04:43:37 np0005548915 podman[101756]: 2025-12-06 09:43:37.657164987 +0000 UTC m=+0.151627396 container attach 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  6 04:43:37 np0005548915 systemd[1]: libpod-06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe.scope: Deactivated successfully.
Dec  6 04:43:37 np0005548915 podman[101756]: 2025-12-06 09:43:37.659188595 +0000 UTC m=+0.153651014 container died 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:43:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-02035fe57f41a745e6feba48b0d1867781b6ea1d261ab74011252ce2cd9a8c72-merged.mount: Deactivated successfully.
Dec  6 04:43:37 np0005548915 podman[101756]: 2025-12-06 09:43:37.711746462 +0000 UTC m=+0.206208861 container remove 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 04:43:37 np0005548915 systemd[1]: libpod-conmon-06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe.scope: Deactivated successfully.
Dec  6 04:43:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v15: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  6 04:43:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=5 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:37 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:43:37 np0005548915 podman[101845]: 2025-12-06 09:43:37.887263044 +0000 UTC m=+0.059591949 container create ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 04:43:37 np0005548915 systemd[1]: Started libpod-conmon-ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783.scope.
Dec  6 04:43:37 np0005548915 podman[101845]: 2025-12-06 09:43:37.862344897 +0000 UTC m=+0.034673822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:37 np0005548915 podman[101845]: 2025-12-06 09:43:37.986358064 +0000 UTC m=+0.158686989 container init ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 04:43:37 np0005548915 podman[101845]: 2025-12-06 09:43:37.997027552 +0000 UTC m=+0.169356467 container start ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:38 np0005548915 podman[101845]: 2025-12-06 09:43:38.000879003 +0000 UTC m=+0.173207968 container attach ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:43:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:38 np0005548915 python3.9[101914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:43:38 np0005548915 priceless_borg[101909]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:43:38 np0005548915 priceless_borg[101909]: --> All data devices are unavailable
Dec  6 04:43:38 np0005548915 systemd[1]: libpod-ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783.scope: Deactivated successfully.
Dec  6 04:43:38 np0005548915 podman[101845]: 2025-12-06 09:43:38.388442243 +0000 UTC m=+0.560771178 container died ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  6 04:43:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4-merged.mount: Deactivated successfully.
Dec  6 04:43:38 np0005548915 podman[101845]: 2025-12-06 09:43:38.442337809 +0000 UTC m=+0.614666714 container remove ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 04:43:38 np0005548915 systemd[1]: libpod-conmon-ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783.scope: Deactivated successfully.
Dec  6 04:43:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec  6 04:43:38 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec  6 04:43:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  6 04:43:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  6 04:43:38 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  6 04:43:38 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 99 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=98/99 n=6 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:38 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 99 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=98/99 n=6 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:43:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:43:38 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 99 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=98/99 n=5 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:39 np0005548915 podman[102119]: 2025-12-06 09:43:39.049203946 +0000 UTC m=+0.042998732 container create 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 04:43:39 np0005548915 systemd[1]: Started libpod-conmon-7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5.scope.
Dec  6 04:43:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:39 np0005548915 podman[102119]: 2025-12-06 09:43:39.033056189 +0000 UTC m=+0.026850995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:39 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:39 np0005548915 podman[102119]: 2025-12-06 09:43:39.167971442 +0000 UTC m=+0.161766278 container init 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:43:39 np0005548915 podman[102119]: 2025-12-06 09:43:39.174533361 +0000 UTC m=+0.168328697 container start 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:43:39 np0005548915 podman[102119]: 2025-12-06 09:43:39.178375182 +0000 UTC m=+0.172170008 container attach 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:43:39 np0005548915 vigorous_heisenberg[102172]: 167 167
Dec  6 04:43:39 np0005548915 systemd[1]: libpod-7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5.scope: Deactivated successfully.
Dec  6 04:43:39 np0005548915 podman[102119]: 2025-12-06 09:43:39.182589614 +0000 UTC m=+0.176384410 container died 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:39 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c0204f9b44ea9921a3499e2e127016faeaebe6dc7340aa36dea2588e2370cab5-merged.mount: Deactivated successfully.
Dec  6 04:43:39 np0005548915 podman[102119]: 2025-12-06 09:43:39.232849864 +0000 UTC m=+0.226644650 container remove 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 04:43:39 np0005548915 systemd[1]: libpod-conmon-7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5.scope: Deactivated successfully.
Dec  6 04:43:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:39.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:39 np0005548915 python3.9[102219]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:43:39 np0005548915 podman[102227]: 2025-12-06 09:43:39.448691061 +0000 UTC m=+0.073819532 container create f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:43:39 np0005548915 systemd[1]: Started libpod-conmon-f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b.scope.
Dec  6 04:43:39 np0005548915 podman[102227]: 2025-12-06 09:43:39.420512958 +0000 UTC m=+0.045641449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:39 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:39 np0005548915 podman[102227]: 2025-12-06 09:43:39.538885152 +0000 UTC m=+0.164013643 container init f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:39 np0005548915 podman[102227]: 2025-12-06 09:43:39.551654441 +0000 UTC m=+0.176782912 container start f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 04:43:39 np0005548915 podman[102227]: 2025-12-06 09:43:39.554246866 +0000 UTC m=+0.179375337 container attach f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:43:39 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec  6 04:43:39 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec  6 04:43:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v18: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 2 objects/s recovering
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]: {
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:    "1": [
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:        {
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "devices": [
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "/dev/loop3"
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            ],
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "lv_name": "ceph_lv0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "lv_size": "21470642176",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "name": "ceph_lv0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "tags": {
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.cluster_name": "ceph",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.crush_device_class": "",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.encrypted": "0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.osd_id": "1",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.type": "block",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.vdo": "0",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:                "ceph.with_tpm": "0"
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            },
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "type": "block",
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:            "vg_name": "ceph_vg0"
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:        }
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]:    ]
Dec  6 04:43:39 np0005548915 quirky_pascal[102257]: }
Dec  6 04:43:39 np0005548915 systemd[1]: libpod-f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b.scope: Deactivated successfully.
Dec  6 04:43:39 np0005548915 podman[102227]: 2025-12-06 09:43:39.852760237 +0000 UTC m=+0.477888708 container died f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  6 04:43:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a-merged.mount: Deactivated successfully.
Dec  6 04:43:39 np0005548915 podman[102227]: 2025-12-06 09:43:39.902650647 +0000 UTC m=+0.527779118 container remove f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:43:39 np0005548915 systemd[1]: libpod-conmon-f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b.scope: Deactivated successfully.
Dec  6 04:43:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:40 np0005548915 podman[102507]: 2025-12-06 09:43:40.571068219 +0000 UTC m=+0.063907563 container create e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:43:40 np0005548915 python3.9[102489]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:43:40 np0005548915 systemd[1]: Started libpod-conmon-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope.
Dec  6 04:43:40 np0005548915 podman[102507]: 2025-12-06 09:43:40.540587501 +0000 UTC m=+0.033426905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:40 np0005548915 podman[102507]: 2025-12-06 09:43:40.664717642 +0000 UTC m=+0.157557006 container init e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:40 np0005548915 podman[102507]: 2025-12-06 09:43:40.673048762 +0000 UTC m=+0.165888096 container start e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 04:43:40 np0005548915 podman[102507]: 2025-12-06 09:43:40.676614545 +0000 UTC m=+0.169453919 container attach e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:43:40 np0005548915 systemd[1]: libpod-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope: Deactivated successfully.
Dec  6 04:43:40 np0005548915 epic_ramanujan[102526]: 167 167
Dec  6 04:43:40 np0005548915 conmon[102526]: conmon e7dc12dcc5e4a2d6714a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope/container/memory.events
Dec  6 04:43:40 np0005548915 podman[102507]: 2025-12-06 09:43:40.681082524 +0000 UTC m=+0.173921868 container died e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:43:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-33c835eb5d49df1f217685264f2d3ce3297695a0f3af6ef8b56ee8cef0429021-merged.mount: Deactivated successfully.
Dec  6 04:43:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:40 np0005548915 podman[102507]: 2025-12-06 09:43:40.729692306 +0000 UTC m=+0.222531670 container remove e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  6 04:43:40 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Dec  6 04:43:40 np0005548915 systemd[1]: libpod-conmon-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope: Deactivated successfully.
Dec  6 04:43:40 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Dec  6 04:43:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec  6 04:43:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec  6 04:43:40 np0005548915 podman[102573]: 2025-12-06 09:43:40.963387838 +0000 UTC m=+0.076635752 container create 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  6 04:43:41 np0005548915 systemd[1]: Started libpod-conmon-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope.
Dec  6 04:43:41 np0005548915 podman[102573]: 2025-12-06 09:43:40.929892312 +0000 UTC m=+0.043140306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:41 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:41 np0005548915 podman[102573]: 2025-12-06 09:43:41.059238783 +0000 UTC m=+0.172486717 container init 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:43:41 np0005548915 podman[102573]: 2025-12-06 09:43:41.067578024 +0000 UTC m=+0.180825938 container start 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:41 np0005548915 podman[102573]: 2025-12-06 09:43:41.070809237 +0000 UTC m=+0.184057151 container attach 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:43:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:41 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:41.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:41 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec  6 04:43:41 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec  6 04:43:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v19: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Dec  6 04:43:41 np0005548915 python3.9[102771]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:43:41 np0005548915 lvm[102794]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:43:41 np0005548915 lvm[102794]: VG ceph_vg0 finished
Dec  6 04:43:41 np0005548915 jovial_shamir[102591]: {}
Dec  6 04:43:41 np0005548915 systemd[1]: libpod-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope: Deactivated successfully.
Dec  6 04:43:41 np0005548915 systemd[1]: libpod-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope: Consumed 1.319s CPU time.
Dec  6 04:43:41 np0005548915 podman[102573]: 2025-12-06 09:43:41.905346983 +0000 UTC m=+1.018594897 container died 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 04:43:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:42 np0005548915 systemd[1]: var-lib-containers-storage-overlay-631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db-merged.mount: Deactivated successfully.
Dec  6 04:43:42 np0005548915 podman[102573]: 2025-12-06 09:43:42.321860068 +0000 UTC m=+1.435107982 container remove 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:43:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:42 np0005548915 systemd[1]: libpod-conmon-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope: Deactivated successfully.
Dec  6 04:43:42 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec  6 04:43:42 np0005548915 python3.9[102962]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:43:42 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec  6 04:43:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:43 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:43.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:43 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  6 04:43:43 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec  6 04:43:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v20: 337 pgs: 337 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  6 04:43:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:44 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec  6 04:43:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:44 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  6 04:43:44 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec  6 04:43:44 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec  6 04:43:44 np0005548915 python3.9[103164]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:43:44 np0005548915 network[103182]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:43:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:43:44 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  6 04:43:44 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  6 04:43:44 np0005548915 network[103183]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:43:44 np0005548915 network[103184]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:43:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:45.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 100 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=100) [1] r=0 lpr=100 pi=[79,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 100 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=100) [1] r=0 lpr=100 pi=[79,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:45 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 100 pg[6.e( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=76/76 les/c/f=77/77/0 sis=100) [1] r=0 lpr=100 pi=[76,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:45 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:45.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  6 04:43:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  6 04:43:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  6 04:43:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  6 04:43:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:43:45 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec  6 04:43:45 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec  6 04:43:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v22: 337 pgs: 337 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  6 04:43:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248001ff0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  6 04:43:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 04:43:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  6 04:43:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  6 04:43:46 np0005548915 podman[103275]: 2025-12-06 09:43:46.146158006 +0000 UTC m=+0.025059374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:43:46 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec  6 04:43:46 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec  6 04:43:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094346 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:43:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:46 np0005548915 podman[103275]: 2025-12-06 09:43:46.913393549 +0000 UTC m=+0.792294937 container create 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  6 04:43:47 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  6 04:43:47 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:47 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:47 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:47 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:47 np0005548915 ceph-mon[74327]: Reconfiguring mon.compute-0 (monmap changed)...
Dec  6 04:43:47 np0005548915 ceph-mon[74327]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  6 04:43:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 04:43:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  6 04:43:47 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[6.e( v 50'39 lc 48'19 (0'0,50'39] local-lis/les=100/101 n=1 ec=54/21 lis/c=76/76 les/c/f=77/77/0 sis=100) [1] r=0 lpr=100 pi=[76,100)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:47 np0005548915 systemd[1]: Started libpod-conmon-84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14.scope.
Dec  6 04:43:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:47.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:47 np0005548915 podman[103275]: 2025-12-06 09:43:47.180162986 +0000 UTC m=+1.059064354 container init 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:43:47 np0005548915 podman[103275]: 2025-12-06 09:43:47.191985657 +0000 UTC m=+1.070886995 container start 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:47 np0005548915 agitated_edison[103342]: 167 167
Dec  6 04:43:47 np0005548915 systemd[1]: libpod-84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14.scope: Deactivated successfully.
Dec  6 04:43:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:47.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:47 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec  6 04:43:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v24: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:47 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec  6 04:43:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  6 04:43:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:48 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec  6 04:43:48 np0005548915 podman[103275]: 2025-12-06 09:43:48.803057656 +0000 UTC m=+2.681959024 container attach 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:48 np0005548915 podman[103275]: 2025-12-06 09:43:48.804697664 +0000 UTC m=+2.683599052 container died 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:43:48 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec  6 04:43:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:49 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:49 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec  6 04:43:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v25: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 04:43:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  6 04:43:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  6 04:43:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec  6 04:43:50 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  6 04:43:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec  6 04:43:50 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1db180d0f7ace9ce4bf8917b1ef307d85eb931e1c865b5599773133fc94b2561-merged.mount: Deactivated successfully.
Dec  6 04:43:50 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec  6 04:43:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec  6 04:43:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec  6 04:43:50 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 102 pg[6.f( v 50'39 (0'0,50'39] local-lis/les=64/65 n=3 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=102 pruub=13.283234596s) [0] r=-1 lpr=102 pi=[64,102)/1 crt=50'39 mlcod 50'39 active pruub 266.538940430s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:50 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 102 pg[6.f( v 50'39 (0'0,50'39] local-lis/les=64/65 n=3 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=102 pruub=13.283174515s) [0] r=-1 lpr=102 pi=[64,102)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 266.538940430s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:50 np0005548915 podman[103275]: 2025-12-06 09:43:50.962720359 +0000 UTC m=+4.841621697 container remove 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:43:50 np0005548915 systemd[1]: libpod-conmon-84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14.scope: Deactivated successfully.
Dec  6 04:43:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:51.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:51 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:51 np0005548915 python3.9[103548]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:51 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qhdjwa (monmap changed)...
Dec  6 04:43:51 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qhdjwa (monmap changed)...
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:43:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:43:51 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec  6 04:43:51 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec  6 04:43:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v27: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:52 np0005548915 podman[103767]: 2025-12-06 09:43:52.121015864 +0000 UTC m=+0.029713108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  6 04:43:52 np0005548915 python3.9[103749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:43:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  6 04:43:52 np0005548915 podman[103767]: 2025-12-06 09:43:52.609448225 +0000 UTC m=+0.518145399 container create 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:43:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:52 np0005548915 systemd[1]: Started libpod-conmon-3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587.scope.
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  6 04:43:53 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:53.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:43:53 np0005548915 podman[103767]: 2025-12-06 09:43:53.195884513 +0000 UTC m=+1.104581757 container init 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  6 04:43:53 np0005548915 podman[103767]: 2025-12-06 09:43:53.211675109 +0000 UTC m=+1.120372293 container start 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  6 04:43:53 np0005548915 intelligent_swartz[103812]: 167 167
Dec  6 04:43:53 np0005548915 systemd[1]: libpod-3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587.scope: Deactivated successfully.
Dec  6 04:43:53 np0005548915 podman[103767]: 2025-12-06 09:43:53.218045963 +0000 UTC m=+1.126743137 container attach 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:43:53 np0005548915 podman[103767]: 2025-12-06 09:43:53.218838805 +0000 UTC m=+1.127535979 container died 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Dec  6 04:43:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:53 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000057s ======
Dec  6 04:43:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:53.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Dec  6 04:43:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay-821a6ef3bb6356adb22830808dc4926ac091537afd201d4a06090c79b5ef9ed5-merged.mount: Deactivated successfully.
Dec  6 04:43:53 np0005548915 podman[103767]: 2025-12-06 09:43:53.354565161 +0000 UTC m=+1.263262305 container remove 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:43:53 np0005548915 systemd[1]: libpod-conmon-3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587.scope: Deactivated successfully.
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec  6 04:43:53 np0005548915 python3.9[103955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v29: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5/223 objects misplaced (2.242%)
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  6 04:43:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=8 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=8 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:43:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:43:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:43:53 np0005548915 podman[104027]: 2025-12-06 09:43:53.98022347 +0000 UTC m=+0.056065768 container create 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  6 04:43:54 np0005548915 systemd[1]: Started libpod-conmon-4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003.scope.
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: Reconfiguring mgr.compute-0.qhdjwa (monmap changed)...
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:43:54 np0005548915 podman[104027]: 2025-12-06 09:43:53.951913624 +0000 UTC m=+0.027756012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:54 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:54 np0005548915 podman[104027]: 2025-12-06 09:43:54.070027901 +0000 UTC m=+0.145870239 container init 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:43:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:54 np0005548915 podman[104027]: 2025-12-06 09:43:54.081509502 +0000 UTC m=+0.157351810 container start 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:43:54 np0005548915 podman[104027]: 2025-12-06 09:43:54.085388165 +0000 UTC m=+0.161230513 container attach 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:43:54 np0005548915 upbeat_jackson[104044]: 167 167
Dec  6 04:43:54 np0005548915 podman[104027]: 2025-12-06 09:43:54.087731082 +0000 UTC m=+0.163573380 container died 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 04:43:54 np0005548915 systemd[1]: libpod-4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003.scope: Deactivated successfully.
Dec  6 04:43:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d40f1315612256cf2011f7e2e3418ec77243a61d2b4f1bde08cea13ddfbff4ce-merged.mount: Deactivated successfully.
Dec  6 04:43:54 np0005548915 podman[104027]: 2025-12-06 09:43:54.142266425 +0000 UTC m=+0.218108763 container remove 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:43:54 np0005548915 systemd[1]: libpod-conmon-4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003.scope: Deactivated successfully.
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:54 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec  6 04:43:54 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:43:54 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Dec  6 04:43:54 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Dec  6 04:43:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:54 np0005548915 podman[104226]: 2025-12-06 09:43:54.808443374 +0000 UTC m=+0.046853973 container create 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  6 04:43:54 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  6 04:43:54 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 105 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=104/105 n=8 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:54 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 105 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=104/105 n=5 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:54 np0005548915 systemd[1]: Started libpod-conmon-0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3.scope.
Dec  6 04:43:54 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:54 np0005548915 podman[104226]: 2025-12-06 09:43:54.786537642 +0000 UTC m=+0.024948251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:43:54 np0005548915 podman[104226]: 2025-12-06 09:43:54.892780747 +0000 UTC m=+0.131191366 container init 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:43:54 np0005548915 podman[104226]: 2025-12-06 09:43:54.899174441 +0000 UTC m=+0.137585030 container start 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:43:54 np0005548915 podman[104226]: 2025-12-06 09:43:54.902422985 +0000 UTC m=+0.140833574 container attach 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:43:54 np0005548915 stupefied_antonelli[104267]: 167 167
Dec  6 04:43:54 np0005548915 systemd[1]: libpod-0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3.scope: Deactivated successfully.
Dec  6 04:43:54 np0005548915 podman[104226]: 2025-12-06 09:43:54.904834134 +0000 UTC m=+0.143244723 container died 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 04:43:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ab48666baafdb3d1f259bc11dbdc4bd1a7e4f9b5c8374efaaa3afa503246fbbe-merged.mount: Deactivated successfully.
Dec  6 04:43:54 np0005548915 podman[104226]: 2025-12-06 09:43:54.947456504 +0000 UTC m=+0.185867093 container remove 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 04:43:54 np0005548915 systemd[1]: libpod-conmon-0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3.scope: Deactivated successfully.
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:55.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:55 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  6 04:43:55 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  6 04:43:55 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  6 04:43:55 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: Reconfiguring crash.compute-0 (monmap changed)...
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: Reconfiguring daemon crash.compute-0 on compute-0
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: Reconfiguring osd.1 (monmap changed)...
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: Reconfiguring daemon osd.1 on compute-0
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:43:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:55.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:43:55 np0005548915 python3.9[104311]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.602268235 +0000 UTC m=+0.040850139 volume create 29d79d266cc39bf95c8993cfd6612f2cad172b127cec8360714d431d53d0e93e
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.609559506 +0000 UTC m=+0.048141410 container create d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:43:55 np0005548915 systemd[1]: Started libpod-conmon-d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27.scope.
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.585077039 +0000 UTC m=+0.023658963 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  6 04:43:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe834a8e4dd3f0cca394b1bce23940316eea2237a86389cf744fd58ed9c7647/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.719107556 +0000 UTC m=+0.157689500 container init d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.72828835 +0000 UTC m=+0.166870284 container start d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 strange_saha[104412]: 65534 65534
Dec  6 04:43:55 np0005548915 systemd[1]: libpod-d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27.scope: Deactivated successfully.
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.731971706 +0000 UTC m=+0.170553640 container attach d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.733611184 +0000 UTC m=+0.172193158 container died d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-fbe834a8e4dd3f0cca394b1bce23940316eea2237a86389cf744fd58ed9c7647-merged.mount: Deactivated successfully.
Dec  6 04:43:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v32: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5/223 objects misplaced (2.242%)
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.796957132 +0000 UTC m=+0.235539046 container remove d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 systemd[1]: libpod-conmon-d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27.scope: Deactivated successfully.
Dec  6 04:43:55 np0005548915 podman[104397]: 2025-12-06 09:43:55.801000068 +0000 UTC m=+0.239581982 volume remove 29d79d266cc39bf95c8993cfd6612f2cad172b127cec8360714d431d53d0e93e
Dec  6 04:43:55 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec  6 04:43:55 np0005548915 podman[104429]: 2025-12-06 09:43:55.883324173 +0000 UTC m=+0.052868896 volume create 4fa083cffff371ad2291549d3b09dafbd3a482881401c5129fd56cea005ed736
Dec  6 04:43:55 np0005548915 podman[104429]: 2025-12-06 09:43:55.890390847 +0000 UTC m=+0.059935570 container create c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec  6 04:43:55 np0005548915 systemd[1]: Started libpod-conmon-c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050.scope.
Dec  6 04:43:55 np0005548915 podman[104429]: 2025-12-06 09:43:55.862865653 +0000 UTC m=+0.032410416 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  6 04:43:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e583021cfab7258ce36587c0be6f879760fcc5ed97c287aa838965959860b2b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:55 np0005548915 podman[104429]: 2025-12-06 09:43:55.985515921 +0000 UTC m=+0.155060694 container init c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 podman[104429]: 2025-12-06 09:43:55.993758659 +0000 UTC m=+0.163303392 container start c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 elastic_nobel[104458]: 65534 65534
Dec  6 04:43:55 np0005548915 systemd[1]: libpod-c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050.scope: Deactivated successfully.
Dec  6 04:43:55 np0005548915 podman[104429]: 2025-12-06 09:43:55.997647151 +0000 UTC m=+0.167191884 container attach c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:55 np0005548915 podman[104429]: 2025-12-06 09:43:55.998278579 +0000 UTC m=+0.167823322 container died c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6e583021cfab7258ce36587c0be6f879760fcc5ed97c287aa838965959860b2b-merged.mount: Deactivated successfully.
Dec  6 04:43:56 np0005548915 podman[104429]: 2025-12-06 09:43:56.034513854 +0000 UTC m=+0.204058577 container remove c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:56 np0005548915 podman[104429]: 2025-12-06 09:43:56.038174051 +0000 UTC m=+0.207718794 volume remove 4fa083cffff371ad2291549d3b09dafbd3a482881401c5129fd56cea005ed736
Dec  6 04:43:56 np0005548915 systemd[1]: libpod-conmon-c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050.scope: Deactivated successfully.
Dec  6 04:43:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:56 np0005548915 systemd[1]: Stopping Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:43:56 np0005548915 ceph-mon[74327]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  6 04:43:56 np0005548915 ceph-mon[74327]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  6 04:43:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:43:56.321Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec  6 04:43:56 np0005548915 podman[104565]: 2025-12-06 09:43:56.331840863 +0000 UTC m=+0.060479536 container died b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bb2a73ca3b14a2c20beb30faadb6ace12cd5adb72f156644e5801ee5b84b2c3c-merged.mount: Deactivated successfully.
Dec  6 04:43:56 np0005548915 podman[104565]: 2025-12-06 09:43:56.376394267 +0000 UTC m=+0.105032930 container remove b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:56 np0005548915 podman[104565]: 2025-12-06 09:43:56.380201988 +0000 UTC m=+0.108840661 volume remove cc9140d1b399a34df664d17bf3d5da457ec5a14a1279788aa2852185673a3bfd
Dec  6 04:43:56 np0005548915 bash[104565]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0
Dec  6 04:43:56 np0005548915 python3.9[104553]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:43:56 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@alertmanager.compute-0.service: Deactivated successfully.
Dec  6 04:43:56 np0005548915 systemd[1]: Stopped Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:43:56 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@alertmanager.compute-0.service: Consumed 1.420s CPU time.
Dec  6 04:43:56 np0005548915 systemd[1]: Starting Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:43:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:56 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  6 04:43:56 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  6 04:43:56 np0005548915 podman[104672]: 2025-12-06 09:43:56.893123725 +0000 UTC m=+0.068454877 volume create df96191fbc5e25dde6954322f5c80fec8b2a1ece9bff16e83ede1b379e193dc2
Dec  6 04:43:56 np0005548915 podman[104672]: 2025-12-06 09:43:56.908776796 +0000 UTC m=+0.084107918 container create b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:56 np0005548915 podman[104672]: 2025-12-06 09:43:56.869898535 +0000 UTC m=+0.045229657 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  6 04:43:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534b49d6523b540f1172e3c7a1e9796019831d81e6906f4fcfaa0985e2a9f95c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534b49d6523b540f1172e3c7a1e9796019831d81e6906f4fcfaa0985e2a9f95c/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:56 np0005548915 podman[104672]: 2025-12-06 09:43:56.999201895 +0000 UTC m=+0.174533087 container init b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:57 np0005548915 podman[104672]: 2025-12-06 09:43:57.004098697 +0000 UTC m=+0.179429849 container start b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:43:57 np0005548915 bash[104672]: b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641
Dec  6 04:43:57 np0005548915 systemd[1]: Started Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.036Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.036Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.048Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.050Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:57 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  6 04:43:57 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.107Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.107Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.113Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.113Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  6 04:43:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:57.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:57 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec  6 04:43:57 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec  6 04:43:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:57 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:43:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:57.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:43:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v33: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  6 04:43:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 04:43:57 np0005548915 podman[104801]: 2025-12-06 09:43:57.824598886 +0000 UTC m=+0.062518494 container create e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:57 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec  6 04:43:57 np0005548915 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec  6 04:43:57 np0005548915 systemd[1]: Started libpod-conmon-e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b.scope.
Dec  6 04:43:57 np0005548915 podman[104801]: 2025-12-06 09:43:57.796754613 +0000 UTC m=+0.034674271 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  6 04:43:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:57 np0005548915 podman[104801]: 2025-12-06 09:43:57.941219591 +0000 UTC m=+0.179139289 container init e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:57 np0005548915 podman[104801]: 2025-12-06 09:43:57.951155008 +0000 UTC m=+0.189074626 container start e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:57 np0005548915 podman[104801]: 2025-12-06 09:43:57.954999649 +0000 UTC m=+0.192919257 container attach e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:57 np0005548915 magical_feynman[104822]: 472 0
Dec  6 04:43:57 np0005548915 systemd[1]: libpod-e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b.scope: Deactivated successfully.
Dec  6 04:43:57 np0005548915 podman[104801]: 2025-12-06 09:43:57.957657836 +0000 UTC m=+0.195577474 container died e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-53e3c1d3d02db252fde23f1b46ec1038f726088626234ad94d2002df85f270f0-merged.mount: Deactivated successfully.
Dec  6 04:43:57 np0005548915 podman[104801]: 2025-12-06 09:43:57.994053396 +0000 UTC m=+0.231973024 container remove e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 systemd[1]: libpod-conmon-e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b.scope: Deactivated successfully.
Dec  6 04:43:58 np0005548915 podman[104844]: 2025-12-06 09:43:58.081016594 +0000 UTC m=+0.058053366 container create 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:58 np0005548915 systemd[1]: Started libpod-conmon-69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7.scope.
Dec  6 04:43:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:43:58 np0005548915 podman[104844]: 2025-12-06 09:43:58.054252972 +0000 UTC m=+0.031289794 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  6 04:43:58 np0005548915 podman[104844]: 2025-12-06 09:43:58.160548128 +0000 UTC m=+0.137584930 container init 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 podman[104844]: 2025-12-06 09:43:58.169212878 +0000 UTC m=+0.146249660 container start 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 nostalgic_moore[104863]: 472 0
Dec  6 04:43:58 np0005548915 systemd[1]: libpod-69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7.scope: Deactivated successfully.
Dec  6 04:43:58 np0005548915 podman[104844]: 2025-12-06 09:43:58.173494322 +0000 UTC m=+0.150531124 container attach 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 podman[104844]: 2025-12-06 09:43:58.174148271 +0000 UTC m=+0.151185073 container died 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 systemd[1]: var-lib-containers-storage-overlay-40cd99596f7ba43ebf5b2afe178e68d864e1bf59405efc44c8e2b54688fb35f4-merged.mount: Deactivated successfully.
Dec  6 04:43:58 np0005548915 podman[104844]: 2025-12-06 09:43:58.224156444 +0000 UTC m=+0.201193216 container remove 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 systemd[1]: libpod-conmon-69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7.scope: Deactivated successfully.
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 04:43:58 np0005548915 systemd[1]: Stopping Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=106 pruub=13.671803474s) [2] r=-1 lpr=106 pi=[86,106)/1 crt=51'1027 mlcod 0'0 active pruub 274.329833984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=106 pruub=13.671212196s) [2] r=-1 lpr=106 pi=[86,106)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 274.329833984s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=106 pruub=13.179901123s) [2] r=-1 lpr=106 pi=[85,106)/1 crt=51'1027 mlcod 0'0 active pruub 273.838836670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=106 pruub=13.179830551s) [2] r=-1 lpr=106 pi=[85,106)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 273.838836670s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=server t=2025-12-06T09:43:58.516369113Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ticker t=2025-12-06T09:43:58.516541678Z level=info msg=stopped last_tick=2025-12-06T09:43:50Z
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=tracing t=2025-12-06T09:43:58.516624821Z level=info msg="Closing tracing"
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana-apiserver t=2025-12-06T09:43:58.516792565Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec  6 04:43:58 np0005548915 podman[104923]: 2025-12-06 09:43:58.53702808 +0000 UTC m=+0.058652263 container died cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 systemd[1]: var-lib-containers-storage-overlay-62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48-merged.mount: Deactivated successfully.
Dec  6 04:43:58 np0005548915 podman[104923]: 2025-12-06 09:43:58.579525706 +0000 UTC m=+0.101149879 container remove cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:58 np0005548915 bash[104923]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:43:58 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@grafana.compute-0.service: Deactivated successfully.
Dec  6 04:43:58 np0005548915 systemd[1]: Stopped Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:43:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:58 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@grafana.compute-0.service: Consumed 4.949s CPU time.
Dec  6 04:43:58 np0005548915 systemd[1]: Starting Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  6 04:43:58 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=107) [2]/[1] r=0 lpr=107 pi=[85,107)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=107) [2]/[1] r=0 lpr=107 pi=[86,107)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=107) [2]/[1] r=0 lpr=107 pi=[86,107)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:58 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=107) [2]/[1] r=0 lpr=107 pi=[85,107)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:43:59 np0005548915 podman[105031]: 2025-12-06 09:43:59.023603527 +0000 UTC m=+0.065762488 container create fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:59.051Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000611495s
Dec  6 04:43:59 np0005548915 podman[105031]: 2025-12-06 09:43:58.989273876 +0000 UTC m=+0.031432927 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  6 04:43:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  6 04:43:59 np0005548915 podman[105031]: 2025-12-06 09:43:59.101122563 +0000 UTC m=+0.143281544 container init fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:59 np0005548915 podman[105031]: 2025-12-06 09:43:59.113712497 +0000 UTC m=+0.155871458 container start fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:43:59 np0005548915 bash[105031]: fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3
Dec  6 04:43:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:59 np0005548915 systemd[1]: Started Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:43:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:59 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec  6 04:43:59 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:43:59 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec  6 04:43:59 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:59 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 04:43:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:43:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:43:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.336511944Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-06T09:43:59Z
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337129252Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337158152Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337168253Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337176893Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337185533Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337193863Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337202414Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337211314Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337222294Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337231344Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337239715Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337250485Z level=info msg=Target target=[all]
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337267965Z level=info msg="Path Home" path=/usr/share/grafana
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337276326Z level=info msg="Path Data" path=/var/lib/grafana
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337283996Z level=info msg="Path Logs" path=/var/log/grafana
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337291616Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337299796Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337308487Z level=info msg="App mode production"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore t=2025-12-06T09:43:59.337935666Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore t=2025-12-06T09:43:59.337979547Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=migrator t=2025-12-06T09:43:59.339338635Z level=info msg="Starting DB migrations"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=migrator t=2025-12-06T09:43:59.374220432Z level=info msg="migrations completed" performed=0 skipped=547 duration=903.967µs
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore t=2025-12-06T09:43:59.375870079Z level=info msg="Created default organization"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=secrets t=2025-12-06T09:43:59.377284231Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugin.store t=2025-12-06T09:43:59.403575419Z level=info msg="Loading plugins..."
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=local.finder t=2025-12-06T09:43:59.491227327Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugin.store t=2025-12-06T09:43:59.491295889Z level=info msg="Plugins loaded" count=55 duration=87.72101ms
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=query_data t=2025-12-06T09:43:59.497184279Z level=info msg="Query Service initialization"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=live.push_http t=2025-12-06T09:43:59.502282636Z level=info msg="Live Push Gateway initialization"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.migration t=2025-12-06T09:43:59.506386245Z level=info msg=Starting
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.state.manager t=2025-12-06T09:43:59.524453596Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=infra.usagestats.collector t=2025-12-06T09:43:59.527835483Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.datasources t=2025-12-06T09:43:59.532945241Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.alerting t=2025-12-06T09:43:59.562793142Z level=info msg="starting to provision alerting"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.alerting t=2025-12-06T09:43:59.562825053Z level=info msg="finished to provision alerting"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.state.manager t=2025-12-06T09:43:59.562949147Z level=info msg="Warming state cache for startup"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.state.manager t=2025-12-06T09:43:59.563561284Z level=info msg="State cache has been initialized" states=0 duration=611.788µs
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.multiorg.alertmanager t=2025-12-06T09:43:59.563654017Z level=info msg="Starting MultiOrg Alertmanager"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.scheduler t=2025-12-06T09:43:59.563698708Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ticker t=2025-12-06T09:43:59.56377418Z level=info msg=starting first_tick=2025-12-06T09:44:00Z
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafanaStorageLogger t=2025-12-06T09:43:59.56688829Z level=info msg="Storage starting"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=http.server t=2025-12-06T09:43:59.571645517Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=http.server t=2025-12-06T09:43:59.572370578Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.dashboard t=2025-12-06T09:43:59.613143035Z level=info msg="starting to provision dashboards"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T09:43:59.625985605Z level=info msg="Update check succeeded" duration=62.040069ms
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.dashboard t=2025-12-06T09:43:59.646783395Z level=info msg="finished to provision dashboards"
Dec  6 04:43:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T09:43:59.66465165Z level=info msg="Update check succeeded" duration=101.083516ms
Dec  6 04:43:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v36: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5 B/s, 0 objects/s recovering
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  6 04:43:59 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=108 pruub=13.148490906s) [2] r=-1 lpr=108 pi=[58,108)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 275.312438965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:43:59 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=108 pruub=13.148456573s) [2] r=-1 lpr=108 pi=[58,108)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 275.312438965s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:43:59 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[85,107)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:59 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[86,107)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:43:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:44:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana-apiserver t=2025-12-06T09:44:00.098588839Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  6 04:44:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana-apiserver t=2025-12-06T09:44:00.099196286Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: Reconfiguring crash.compute-1 (monmap changed)...
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: Reconfiguring daemon crash.compute-1 on compute-1
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:00 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec  6 04:44:00 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:44:00 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Dec  6 04:44:00 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Dec  6 04:44:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  6 04:44:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  6 04:44:00 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=7 ec=58/45 lis/c=107/86 les/c/f=108/87/0 sis=109 pruub=14.998575211s) [2] async=[2] r=-1 lpr=109 pi=[86,109)/1 crt=51'1027 mlcod 51'1027 active pruub 278.167938232s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:00 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=7 ec=58/45 lis/c=107/86 les/c/f=108/87/0 sis=109 pruub=14.997790337s) [2] r=-1 lpr=109 pi=[86,109)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 278.167938232s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:00 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=5 ec=58/45 lis/c=107/85 les/c/f=108/86/0 sis=109 pruub=14.995147705s) [2] async=[2] r=-1 lpr=109 pi=[85,109)/1 crt=51'1027 mlcod 51'1027 active pruub 278.166046143s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:00 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=5 ec=58/45 lis/c=107/85 les/c/f=108/86/0 sis=109 pruub=14.995041847s) [2] r=-1 lpr=109 pi=[85,109)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 278.166046143s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:00 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=109) [2]/[1] r=0 lpr=109 pi=[58,109)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:00 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=109) [2]/[1] r=0 lpr=109 pi=[58,109)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:44:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:44:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:44:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:01 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:44:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:01.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:01 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec  6 04:44:01 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:44:01 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec  6 04:44:01 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: Reconfiguring osd.0 (monmap changed)...
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: Reconfiguring daemon osd.0 on compute-1
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v39: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  6 04:44:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  6 04:44:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  6 04:44:02 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 110 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=109/110 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=109) [2]/[1] async=[2] r=0 lpr=109 pi=[58,109)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:44:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:02 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec  6 04:44:02 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:44:02 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec  6 04:44:02 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: Reconfiguring mon.compute-1 (monmap changed)...
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: Reconfiguring daemon mon.compute-1 on compute-1
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:02 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 04:44:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:44:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:44:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:03 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  6 04:44:03 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 111 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=109/110 n=2 ec=58/45 lis/c=109/58 les/c/f=110/59/0 sis=111 pruub=14.630587578s) [2] async=[2] r=-1 lpr=111 pi=[58,111)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 280.370971680s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:03 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 111 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=109/110 n=2 ec=58/45 lis/c=109/58 les/c/f=110/59/0 sis=111 pruub=14.630526543s) [2] r=-1 lpr=111 pi=[58,111)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 280.370971680s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: Reconfiguring mon.compute-2 (monmap changed)...
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: Reconfiguring daemon mon.compute-2 on compute-2
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:03 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.oazbvn (monmap changed)...
Dec  6 04:44:03 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.oazbvn (monmap changed)...
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:44:03 np0005548915 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.oazbvn on compute-2
Dec  6 04:44:03 np0005548915 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.oazbvn on compute-2
Dec  6 04:44:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v42: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  6 04:44:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO root] Restarting engine...
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STOPPING
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STOPPING
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STOPPED
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STOPPED
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STARTING
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STARTING
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: Reconfiguring mgr.compute-2.oazbvn (monmap changed)...
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: Reconfiguring daemon mgr.compute-2.oazbvn on compute-2
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Serving on http://:::9283
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Serving on http://:::9283
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STARTED
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STARTED
Dec  6 04:44:04 np0005548915 ceph-mgr[74618]: [prometheus INFO root] Engine started.
Dec  6 04:44:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:05 np0005548915 podman[105245]: 2025-12-06 09:44:05.044914046 +0000 UTC m=+0.086824816 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:44:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:44:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:44:05 np0005548915 podman[105245]: 2025-12-06 09:44:05.147550096 +0000 UTC m=+0.189460856 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 04:44:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:05 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:44:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:05.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:44:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  6 04:44:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  6 04:44:05 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  6 04:44:05 np0005548915 podman[105364]: 2025-12-06 09:44:05.783644508 +0000 UTC m=+0.073792101 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:44:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v45: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  6 04:44:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec  6 04:44:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  6 04:44:05 np0005548915 podman[105364]: 2025-12-06 09:44:05.795328855 +0000 UTC m=+0.085476508 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:44:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:06 np0005548915 podman[105456]: 2025-12-06 09:44:06.271642446 +0000 UTC m=+0.075992073 container exec f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:44:06 np0005548915 podman[105456]: 2025-12-06 09:44:06.287940686 +0000 UTC m=+0.092290083 container exec_died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 04:44:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  6 04:44:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  6 04:44:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  6 04:44:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  6 04:44:06 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  6 04:44:06 np0005548915 podman[105521]: 2025-12-06 09:44:06.596297002 +0000 UTC m=+0.070615429 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:44:06 np0005548915 podman[105521]: 2025-12-06 09:44:06.607762802 +0000 UTC m=+0.082081229 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:44:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094406 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:44:06 np0005548915 podman[105588]: 2025-12-06 09:44:06.903906945 +0000 UTC m=+0.070727011 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20)
Dec  6 04:44:06 np0005548915 podman[105588]: 2025-12-06 09:44:06.924075458 +0000 UTC m=+0.090895524 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Dec  6 04:44:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:44:07.054Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003790559s
Dec  6 04:44:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 04:44:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 04:44:07 np0005548915 podman[105655]: 2025-12-06 09:44:07.232259929 +0000 UTC m=+0.079415623 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:44:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:07 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:07 np0005548915 podman[105655]: 2025-12-06 09:44:07.273425635 +0000 UTC m=+0.120581319 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:44:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:07.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  6 04:44:07 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  6 04:44:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  6 04:44:07 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  6 04:44:07 np0005548915 podman[105727]: 2025-12-06 09:44:07.545652029 +0000 UTC m=+0.074620373 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:44:07 np0005548915 podman[105727]: 2025-12-06 09:44:07.760849977 +0000 UTC m=+0.289818301 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:44:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v48: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 0 objects/s recovering
Dec  6 04:44:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:08 np0005548915 podman[105839]: 2025-12-06 09:44:08.335613058 +0000 UTC m=+0.101497169 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:44:08 np0005548915 podman[105839]: 2025-12-06 09:44:08.384666574 +0000 UTC m=+0.150550685 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:44:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:44:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:44:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:09.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:09 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:09.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:44:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v50: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 0 B/s wr, 0 op/s; 25 B/s, 0 objects/s recovering
Dec  6 04:44:09 np0005548915 podman[105978]: 2025-12-06 09:44:09.836646232 +0000 UTC m=+0.065631064 container create 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:44:09 np0005548915 systemd[1]: Started libpod-conmon-79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e.scope.
Dec  6 04:44:09 np0005548915 podman[105978]: 2025-12-06 09:44:09.816196622 +0000 UTC m=+0.045181484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:44:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:44:09 np0005548915 podman[105978]: 2025-12-06 09:44:09.939011425 +0000 UTC m=+0.167996267 container init 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec  6 04:44:09 np0005548915 podman[105978]: 2025-12-06 09:44:09.946365907 +0000 UTC m=+0.175350739 container start 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:44:09 np0005548915 podman[105978]: 2025-12-06 09:44:09.950499967 +0000 UTC m=+0.179484829 container attach 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:44:09 np0005548915 magical_maxwell[105994]: 167 167
Dec  6 04:44:09 np0005548915 systemd[1]: libpod-79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e.scope: Deactivated successfully.
Dec  6 04:44:09 np0005548915 podman[105978]: 2025-12-06 09:44:09.956605033 +0000 UTC m=+0.185589865 container died 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 04:44:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-eb4aa4c0529a18da4e4e5e88b518ec15d8c325752d837b34cb601be999ceba9a-merged.mount: Deactivated successfully.
Dec  6 04:44:10 np0005548915 podman[105978]: 2025-12-06 09:44:10.00260513 +0000 UTC m=+0.231589962 container remove 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:44:10 np0005548915 systemd[1]: libpod-conmon-79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e.scope: Deactivated successfully.
Dec  6 04:44:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  6 04:44:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  6 04:44:10 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  6 04:44:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:10 np0005548915 podman[106017]: 2025-12-06 09:44:10.243943143 +0000 UTC m=+0.078421584 container create c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:44:10 np0005548915 systemd[1]: Started libpod-conmon-c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb.scope.
Dec  6 04:44:10 np0005548915 podman[106017]: 2025-12-06 09:44:10.213854224 +0000 UTC m=+0.048332715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:44:10 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:44:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:10 np0005548915 podman[106017]: 2025-12-06 09:44:10.359715922 +0000 UTC m=+0.194194413 container init c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 04:44:10 np0005548915 podman[106017]: 2025-12-06 09:44:10.378600347 +0000 UTC m=+0.213078758 container start c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:44:10 np0005548915 podman[106017]: 2025-12-06 09:44:10.382546611 +0000 UTC m=+0.217025102 container attach c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 04:44:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:10 np0005548915 romantic_sammet[106033]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:44:10 np0005548915 romantic_sammet[106033]: --> All data devices are unavailable
Dec  6 04:44:10 np0005548915 systemd[1]: libpod-c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb.scope: Deactivated successfully.
Dec  6 04:44:10 np0005548915 podman[106017]: 2025-12-06 09:44:10.81757354 +0000 UTC m=+0.652051981 container died c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:44:10 np0005548915 systemd[1]: var-lib-containers-storage-overlay-404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743-merged.mount: Deactivated successfully.
Dec  6 04:44:10 np0005548915 podman[106017]: 2025-12-06 09:44:10.87510364 +0000 UTC m=+0.709582031 container remove c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  6 04:44:10 np0005548915 systemd[1]: libpod-conmon-c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb.scope: Deactivated successfully.
Dec  6 04:44:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  6 04:44:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  6 04:44:11 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  6 04:44:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:11.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:11 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:11.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:11 np0005548915 podman[106157]: 2025-12-06 09:44:11.554605284 +0000 UTC m=+0.051486377 container create cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:44:11 np0005548915 systemd[1]: Started libpod-conmon-cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27.scope.
Dec  6 04:44:11 np0005548915 podman[106157]: 2025-12-06 09:44:11.535919164 +0000 UTC m=+0.032800267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:44:11 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:44:11 np0005548915 podman[106157]: 2025-12-06 09:44:11.647568576 +0000 UTC m=+0.144449749 container init cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:44:11 np0005548915 podman[106157]: 2025-12-06 09:44:11.659552831 +0000 UTC m=+0.156433924 container start cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 04:44:11 np0005548915 podman[106157]: 2025-12-06 09:44:11.663521045 +0000 UTC m=+0.160402218 container attach cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:44:11 np0005548915 romantic_meitner[106174]: 167 167
Dec  6 04:44:11 np0005548915 systemd[1]: libpod-cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27.scope: Deactivated successfully.
Dec  6 04:44:11 np0005548915 podman[106157]: 2025-12-06 09:44:11.672808734 +0000 UTC m=+0.169689847 container died cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:44:11 np0005548915 systemd[1]: var-lib-containers-storage-overlay-5317eb87989b42d36bd195eea2855d8b89587beaebe60ee619d0458b116567a7-merged.mount: Deactivated successfully.
Dec  6 04:44:11 np0005548915 podman[106157]: 2025-12-06 09:44:11.723754533 +0000 UTC m=+0.220635656 container remove cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:44:11 np0005548915 systemd[1]: libpod-conmon-cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27.scope: Deactivated successfully.
Dec  6 04:44:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 237 B/s rd, 0 B/s wr, 0 op/s; 25 B/s, 0 objects/s recovering
Dec  6 04:44:11 np0005548915 podman[106203]: 2025-12-06 09:44:11.94063393 +0000 UTC m=+0.059567330 container create 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:44:11 np0005548915 systemd[1]: Started libpod-conmon-66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4.scope.
Dec  6 04:44:12 np0005548915 podman[106203]: 2025-12-06 09:44:11.922811016 +0000 UTC m=+0.041744376 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:44:12 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:44:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:12 np0005548915 podman[106203]: 2025-12-06 09:44:12.056089241 +0000 UTC m=+0.175022701 container init 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:44:12 np0005548915 podman[106203]: 2025-12-06 09:44:12.075238083 +0000 UTC m=+0.194171443 container start 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:44:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:12 np0005548915 podman[106203]: 2025-12-06 09:44:12.112324073 +0000 UTC m=+0.231257473 container attach 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]: {
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:    "1": [
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:        {
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "devices": [
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "/dev/loop3"
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            ],
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "lv_name": "ceph_lv0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "lv_size": "21470642176",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "name": "ceph_lv0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "tags": {
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.cluster_name": "ceph",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.crush_device_class": "",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.encrypted": "0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.osd_id": "1",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.type": "block",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.vdo": "0",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:                "ceph.with_tpm": "0"
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            },
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "type": "block",
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:            "vg_name": "ceph_vg0"
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:        }
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]:    ]
Dec  6 04:44:12 np0005548915 stupefied_cerf[106219]: }
Dec  6 04:44:12 np0005548915 systemd[1]: libpod-66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4.scope: Deactivated successfully.
Dec  6 04:44:12 np0005548915 podman[106240]: 2025-12-06 09:44:12.471031132 +0000 UTC m=+0.025987472 container died 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:44:12 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e-merged.mount: Deactivated successfully.
Dec  6 04:44:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:13.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:13 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:13 np0005548915 podman[106240]: 2025-12-06 09:44:13.276411114 +0000 UTC m=+0.831367444 container remove 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  6 04:44:13 np0005548915 systemd[1]: libpod-conmon-66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4.scope: Deactivated successfully.
Dec  6 04:44:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:44:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:13.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:44:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v54: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec  6 04:44:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec  6 04:44:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  6 04:44:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 04:44:13 np0005548915 podman[106366]: 2025-12-06 09:44:13.93693823 +0000 UTC m=+0.051092737 container create 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:44:13 np0005548915 systemd[1]: Started libpod-conmon-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope.
Dec  6 04:44:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:44:14 np0005548915 podman[106366]: 2025-12-06 09:44:13.915425509 +0000 UTC m=+0.029580036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:44:14 np0005548915 podman[106366]: 2025-12-06 09:44:14.013901622 +0000 UTC m=+0.128056199 container init 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:44:14 np0005548915 podman[106366]: 2025-12-06 09:44:14.019689825 +0000 UTC m=+0.133844342 container start 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 04:44:14 np0005548915 podman[106366]: 2025-12-06 09:44:14.023082116 +0000 UTC m=+0.137236723 container attach 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:44:14 np0005548915 keen_volhard[106383]: 167 167
Dec  6 04:44:14 np0005548915 systemd[1]: libpod-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope: Deactivated successfully.
Dec  6 04:44:14 np0005548915 conmon[106383]: conmon 864841bb5d69267caad9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope/container/memory.events
Dec  6 04:44:14 np0005548915 podman[106366]: 2025-12-06 09:44:14.028445208 +0000 UTC m=+0.142599725 container died 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:44:14 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a3d3c7249c99ce26d04507b5a3f7206697f441047ddbdf459ec816e682df2b09-merged.mount: Deactivated successfully.
Dec  6 04:44:14 np0005548915 podman[106366]: 2025-12-06 09:44:14.07071892 +0000 UTC m=+0.184873427 container remove 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  6 04:44:14 np0005548915 systemd[1]: libpod-conmon-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope: Deactivated successfully.
Dec  6 04:44:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:14 np0005548915 podman[106406]: 2025-12-06 09:44:14.245558508 +0000 UTC m=+0.054588379 container create 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:44:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  6 04:44:14 np0005548915 systemd[1]: Started libpod-conmon-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope.
Dec  6 04:44:14 np0005548915 podman[106406]: 2025-12-06 09:44:14.217179875 +0000 UTC m=+0.026209746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:44:14 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:44:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  6 04:44:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  6 04:44:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:44:14 np0005548915 podman[106406]: 2025-12-06 09:44:14.34925631 +0000 UTC m=+0.158286211 container init 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:44:14 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  6 04:44:14 np0005548915 podman[106406]: 2025-12-06 09:44:14.358836744 +0000 UTC m=+0.167866625 container start 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:44:14 np0005548915 podman[106406]: 2025-12-06 09:44:14.363105427 +0000 UTC m=+0.172135298 container attach 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:44:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  6 04:44:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:15 np0005548915 lvm[106497]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:44:15 np0005548915 lvm[106497]: VG ceph_vg0 finished
Dec  6 04:44:15 np0005548915 charming_ritchie[106422]: {}
Dec  6 04:44:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:15.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:15 np0005548915 systemd[1]: libpod-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope: Deactivated successfully.
Dec  6 04:44:15 np0005548915 podman[106406]: 2025-12-06 09:44:15.185271912 +0000 UTC m=+0.994301753 container died 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:44:15 np0005548915 systemd[1]: libpod-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope: Consumed 1.349s CPU time.
Dec  6 04:44:15 np0005548915 systemd[1]: var-lib-containers-storage-overlay-86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94-merged.mount: Deactivated successfully.
Dec  6 04:44:15 np0005548915 podman[106406]: 2025-12-06 09:44:15.239650784 +0000 UTC m=+1.048680625 container remove 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:44:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:15 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:15 np0005548915 systemd[1]: libpod-conmon-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope: Deactivated successfully.
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  6 04:44:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:15.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:44:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec  6 04:44:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 04:44:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  6 04:44:16 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 04:44:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 04:44:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  6 04:44:16 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  6 04:44:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:17.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:17 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:17.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  6 04:44:17 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 04:44:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  6 04:44:17 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  6 04:44:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Dec  6 04:44:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec  6 04:44:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  6 04:44:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003d80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  6 04:44:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  6 04:44:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  6 04:44:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  6 04:44:18 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  6 04:44:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:19 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:19.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Dec  6 04:44:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec  6 04:44:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  6 04:44:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  6 04:44:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:21 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:44:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:21.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:44:21 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  6 04:44:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec  6 04:44:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec  6 04:44:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  6 04:44:21 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  6 04:44:21 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  6 04:44:21 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  6 04:44:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:22 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  6 04:44:22 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  6 04:44:22 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  6 04:44:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  6 04:44:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  6 04:44:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  6 04:44:22 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  6 04:44:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:23 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:23.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:44:23
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['vms', '.nfs', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'default.rgw.control']
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v66: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:44:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:44:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:44:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:44:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:44:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:44:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:44:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:44:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:24 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  6 04:44:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:25 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:25.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:44:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec  6 04:44:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  6 04:44:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  6 04:44:25 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  6 04:44:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  6 04:44:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  6 04:44:25 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  6 04:44:25 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 127 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=127 pruub=14.457291603s) [0] r=-1 lpr=127 pi=[89,127)/1 crt=51'1027 mlcod 0'0 active pruub 302.696716309s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:25 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 127 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=127 pruub=14.457220078s) [0] r=-1 lpr=127 pi=[89,127)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 302.696716309s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  6 04:44:26 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  6 04:44:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  6 04:44:27 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  6 04:44:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 128 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=128) [0]/[1] r=0 lpr=128 pi=[89,128)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:27 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 128 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=128) [0]/[1] r=0 lpr=128 pi=[89,128)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:44:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:27.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v71: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 207 B/s rd, 0 op/s
Dec  6 04:44:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  6 04:44:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  6 04:44:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  6 04:44:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:28 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 129 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=128/129 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=128) [0]/[1] async=[0] r=0 lpr=128 pi=[89,128)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:44:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  6 04:44:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  6 04:44:28 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  6 04:44:28 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 130 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=128/129 n=7 ec=58/45 lis/c=128/89 les/c/f=129/90/0 sis=130 pruub=15.355307579s) [0] async=[0] r=-1 lpr=130 pi=[89,130)/1 crt=51'1027 mlcod 51'1027 active pruub 306.535247803s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:28 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 130 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=128/129 n=7 ec=58/45 lis/c=128/89 les/c/f=129/90/0 sis=130 pruub=15.355225563s) [0] r=-1 lpr=130 pi=[89,130)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 306.535247803s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:29.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:44:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  6 04:44:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  6 04:44:29 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  6 04:44:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:31 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:31.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 213 B/s rd, 0 op/s
Dec  6 04:44:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:33.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:33 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:33.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Dec  6 04:44:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec  6 04:44:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  6 04:44:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  6 04:44:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  6 04:44:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  6 04:44:34 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  6 04:44:34 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  6 04:44:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094434 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:44:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  6 04:44:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:35 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:35.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 294 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
Dec  6 04:44:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec  6 04:44:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  6 04:44:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  6 04:44:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  6 04:44:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  6 04:44:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  6 04:44:36 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  6 04:44:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=133 pruub=13.667246819s) [0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1027 mlcod 0'0 active pruub 312.198425293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:36 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=133 pruub=13.667204857s) [0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 312.198425293s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  6 04:44:37 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  6 04:44:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:37 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:37.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  6 04:44:37 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  6 04:44:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 134 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=134) [0]/[1] r=0 lpr=134 pi=[97,134)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:37 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 134 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=134) [0]/[1] r=0 lpr=134 pi=[97,134)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:44:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Dec  6 04:44:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  6 04:44:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  6 04:44:38 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  6 04:44:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:44:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:44:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:39 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 135 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=134/135 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[97,134)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:44:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:39.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  6 04:44:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  6 04:44:39 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  6 04:44:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 136 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=134/135 n=2 ec=58/45 lis/c=134/97 les/c/f=135/98/0 sis=136 pruub=15.714330673s) [0] async=[0] r=-1 lpr=136 pi=[97,136)/1 crt=51'1027 mlcod 51'1027 active pruub 317.670806885s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:39 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 136 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=134/135 n=2 ec=58/45 lis/c=134/97 les/c/f=135/98/0 sis=136 pruub=15.714257240s) [0] r=-1 lpr=136 pi=[97,136)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 317.670806885s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:44:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  6 04:44:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  6 04:44:40 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  6 04:44:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:41 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v87: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:44:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:43 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:43 np0005548915 python3.9[106772]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:44:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s; 18 B/s, 0 objects/s recovering
Dec  6 04:44:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  6 04:44:44 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  6 04:44:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:45 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  6 04:44:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 829 B/s rd, 165 B/s wr, 1 op/s; 17 B/s, 0 objects/s recovering
Dec  6 04:44:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec  6 04:44:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  6 04:44:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:46 np0005548915 python3.9[107086]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  6 04:44:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  6 04:44:46 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  6 04:44:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  6 04:44:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  6 04:44:46 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  6 04:44:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:47.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:44:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:44:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:47 np0005548915 python3.9[107239]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  6 04:44:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:44:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:47.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  6 04:44:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 1.6 KiB/s wr, 5 op/s; 15 B/s, 0 objects/s recovering
Dec  6 04:44:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec  6 04:44:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  6 04:44:48 np0005548915 python3.9[107393]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:44:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  6 04:44:48 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=140 pruub=13.577693939s) [2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1027 mlcod 0'0 active pruub 324.368438721s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:48 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=140 pruub=13.577651024s) [2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 324.368438721s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  6 04:44:48 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  6 04:44:48 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 141 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=141) [2]/[1] r=0 lpr=141 pi=[80,141)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:48 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 141 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=141) [2]/[1] r=0 lpr=141 pi=[80,141)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 04:44:49 np0005548915 python3.9[107545]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  6 04:44:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:49.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:49 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:44:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:44:49 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  6 04:44:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v95: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.9 KiB/s wr, 6 op/s
Dec  6 04:44:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  6 04:44:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:44:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  6 04:44:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:44:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  6 04:44:49 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 142 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=109/109 les/c/f=110/110/0 sis=142) [1] r=0 lpr=142 pi=[109,142)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:44:49 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  6 04:44:49 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 142 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=141/142 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=141) [2]/[1] async=[2] r=0 lpr=141 pi=[80,141)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:44:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:44:50 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 04:44:50 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 04:44:50 np0005548915 python3.9[107699]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:44:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec  6 04:44:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec  6 04:44:50 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec  6 04:44:50 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=141/142 n=5 ec=58/45 lis/c=141/80 les/c/f=142/81/0 sis=143 pruub=15.002619743s) [2] async=[2] r=-1 lpr=143 pi=[80,143)/1 crt=51'1027 mlcod 51'1027 active pruub 328.184936523s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:50 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=141/142 n=5 ec=58/45 lis/c=141/80 les/c/f=142/81/0 sis=143 pruub=15.002544403s) [2] r=-1 lpr=143 pi=[80,143)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 328.184936523s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:50 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=109/109 les/c/f=110/110/0 sis=143) [1]/[2] r=-1 lpr=143 pi=[109,143)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:50 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=109/109 les/c/f=110/110/0 sis=143) [1]/[2] r=-1 lpr=143 pi=[109,143)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 04:44:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:44:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:44:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:44:51 np0005548915 python3.9[107852]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:44:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:51 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:51 np0005548915 python3.9[107931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:44:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v98: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec  6 04:44:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec  6 04:44:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec  6 04:44:52 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec  6 04:44:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec  6 04:44:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec  6 04:44:53 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec  6 04:44:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 145 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=143/109 les/c/f=144/110/0 sis=145) [1] r=0 lpr=145 pi=[109,145)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  6 04:44:53 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 145 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=143/109 les/c/f=144/110/0 sis=145) [1] r=0 lpr=145 pi=[109,145)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 04:44:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:53 np0005548915 python3.9[108084]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:44:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:53 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:53.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v101: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s; 5/224 objects misplaced (2.232%); 0 B/s, 1 objects/s recovering
Dec  6 04:44:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:44:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f35c6140610>)]
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f35c6140700>)]
Dec  6 04:44:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  6 04:44:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec  6 04:44:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec  6 04:44:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:54 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec  6 04:44:54 np0005548915 ceph-osd[82803]: osd.1 pg_epoch: 146 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=145/146 n=5 ec=58/45 lis/c=143/109 les/c/f=144/110/0 sis=145) [1] r=0 lpr=145 pi=[109,145)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 04:44:54 np0005548915 python3.9[108239]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  6 04:44:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:55 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e35: compute-0.qhdjwa(active, since 92s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec  6 04:44:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:55.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:55.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:55 np0005548915 python3.9[108394]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  6 04:44:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 413 B/s rd, 206 B/s wr, 1 op/s; 5/224 objects misplaced (2.232%); 0 B/s, 1 objects/s recovering
Dec  6 04:44:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:56 np0005548915 python3.9[108547]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 04:44:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094456 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:44:57 np0005548915 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:58224] [POST] [200] [0.148s] [4.0B] [3cb85339-82e1-47be-b992-8a94186ac764] /api/prometheus_receiver
Dec  6 04:44:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:57.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:57 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:44:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:57.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:44:57 np0005548915 python3.9[108704]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  6 04:44:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 682 B/s wr, 1 op/s; 18 B/s, 1 objects/s recovering
Dec  6 04:44:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:58 np0005548915 python3.9[108856]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:44:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:44:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:59 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:44:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:44:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:44:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:59.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:44:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v105: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 397 B/s rd, 529 B/s wr, 1 op/s; 14 B/s, 1 objects/s recovering
Dec  6 04:45:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Dec  6 04:45:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Dec  6 04:45:01 np0005548915 python3.9[109011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:45:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:01.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:01 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 117 B/s rd, 353 B/s wr, 0 op/s; 12 B/s, 0 objects/s recovering
Dec  6 04:45:01 np0005548915 python3.9[109165]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:45:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:02 np0005548915 python3.9[109243]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:45:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:03 np0005548915 python3.9[109396]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:45:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:03.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:03 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:03.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:03 np0005548915 python3.9[109475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:45:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v107: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 307 B/s wr, 0 op/s; 10 B/s, 0 objects/s recovering
Dec  6 04:45:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:04 np0005548915 python3.9[109627]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:45:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:05.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:05 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:05.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v108: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 262 B/s rd, 262 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Dec  6 04:45:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:06.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:45:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:07 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:07 np0005548915 python3.9[109806]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:45:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:45:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:45:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v109: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Dec  6 04:45:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:08 np0005548915 python3.9[109959]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  6 04:45:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=infra.usagestats t=2025-12-06T09:45:08.573091254Z level=info msg="Usage stats are ready to report"
Dec  6 04:45:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:45:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:45:08 np0005548915 python3.9[110109]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:45:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:09 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:09.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v110: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:45:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:10 np0005548915 python3.9[110264]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:45:10 np0005548915 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  6 04:45:10 np0005548915 systemd[1]: tuned.service: Deactivated successfully.
Dec  6 04:45:10 np0005548915 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  6 04:45:10 np0005548915 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  6 04:45:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:10 np0005548915 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  6 04:45:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  6 04:45:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  6 04:45:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:11 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:11.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:11 np0005548915 python3.9[110427]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  6 04:45:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v111: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:45:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:13.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:13 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:13.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v112: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:45:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:15 np0005548915 python3.9[110582]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:45:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:15 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:15.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:45:16 np0005548915 python3.9[110768]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:45:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:16 np0005548915 podman[110884]: 2025-12-06 09:45:16.355815321 +0000 UTC m=+0.063097275 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  6 04:45:16 np0005548915 podman[110884]: 2025-12-06 09:45:16.450790491 +0000 UTC m=+0.158072425 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:45:16 np0005548915 systemd[1]: session-39.scope: Deactivated successfully.
Dec  6 04:45:16 np0005548915 systemd[1]: session-39.scope: Consumed 1min 8.492s CPU time.
Dec  6 04:45:16 np0005548915 systemd-logind[795]: Session 39 logged out. Waiting for processes to exit.
Dec  6 04:45:16 np0005548915 systemd-logind[795]: Removed session 39.
Dec  6 04:45:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:16 np0005548915 podman[111002]: 2025-12-06 09:45:16.931365192 +0000 UTC m=+0.059583551 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:45:16 np0005548915 podman[111002]: 2025-12-06 09:45:16.938918043 +0000 UTC m=+0.067136352 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:45:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:16.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:45:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:17 np0005548915 podman[111095]: 2025-12-06 09:45:17.300878407 +0000 UTC m=+0.053166182 container exec f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:45:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:17 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248004b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:17 np0005548915 podman[111095]: 2025-12-06 09:45:17.308807227 +0000 UTC m=+0.061094982 container exec_died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:45:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:17 np0005548915 podman[111159]: 2025-12-06 09:45:17.582209951 +0000 UTC m=+0.085802797 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:45:17 np0005548915 podman[111159]: 2025-12-06 09:45:17.592867304 +0000 UTC m=+0.096460150 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:45:17 np0005548915 podman[111225]: 2025-12-06 09:45:17.810969244 +0000 UTC m=+0.053137423 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git)
Dec  6 04:45:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v114: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:45:17 np0005548915 podman[111225]: 2025-12-06 09:45:17.828980101 +0000 UTC m=+0.071148230 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, release=1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec  6 04:45:18 np0005548915 podman[111290]: 2025-12-06 09:45:18.02397101 +0000 UTC m=+0.046290411 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:45:18 np0005548915 podman[111290]: 2025-12-06 09:45:18.044938246 +0000 UTC m=+0.067257637 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:45:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:18 np0005548915 podman[111365]: 2025-12-06 09:45:18.274189495 +0000 UTC m=+0.055511816 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:45:18 np0005548915 podman[111365]: 2025-12-06 09:45:18.446919482 +0000 UTC m=+0.228241783 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:45:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:18 np0005548915 podman[111477]: 2025-12-06 09:45:18.872594006 +0000 UTC m=+0.067569865 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:45:18 np0005548915 podman[111477]: 2025-12-06 09:45:18.917380086 +0000 UTC m=+0.112355955 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:45:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:45:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:19.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:19 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094519 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:45:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:45:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:45:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:20 np0005548915 podman[111693]: 2025-12-06 09:45:20.138152575 +0000 UTC m=+0.040640931 container create 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:45:20 np0005548915 systemd[1]: Started libpod-conmon-477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c.scope.
Dec  6 04:45:20 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:45:20 np0005548915 podman[111693]: 2025-12-06 09:45:20.12106165 +0000 UTC m=+0.023550016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:45:20 np0005548915 podman[111693]: 2025-12-06 09:45:20.216893536 +0000 UTC m=+0.119381902 container init 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec  6 04:45:20 np0005548915 podman[111693]: 2025-12-06 09:45:20.224686192 +0000 UTC m=+0.127174538 container start 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:45:20 np0005548915 podman[111693]: 2025-12-06 09:45:20.227638351 +0000 UTC m=+0.130126697 container attach 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:45:20 np0005548915 jovial_nobel[111709]: 167 167
Dec  6 04:45:20 np0005548915 systemd[1]: libpod-477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c.scope: Deactivated successfully.
Dec  6 04:45:20 np0005548915 podman[111693]: 2025-12-06 09:45:20.229981783 +0000 UTC m=+0.132470119 container died 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:45:20 np0005548915 systemd[1]: var-lib-containers-storage-overlay-81f86bb6b20a460998717dbe826df23c7509eb14f42214d1291577976353c2ac-merged.mount: Deactivated successfully.
Dec  6 04:45:20 np0005548915 podman[111693]: 2025-12-06 09:45:20.273651053 +0000 UTC m=+0.176139399 container remove 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:45:20 np0005548915 systemd[1]: libpod-conmon-477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c.scope: Deactivated successfully.
Dec  6 04:45:20 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:45:20 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:20 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:20 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:45:20 np0005548915 podman[111733]: 2025-12-06 09:45:20.414284227 +0000 UTC m=+0.039467269 container create 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:45:20 np0005548915 systemd[1]: Started libpod-conmon-41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b.scope.
Dec  6 04:45:20 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:45:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:20 np0005548915 podman[111733]: 2025-12-06 09:45:20.399581837 +0000 UTC m=+0.024764899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:45:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:20 np0005548915 podman[111733]: 2025-12-06 09:45:20.506938808 +0000 UTC m=+0.132121880 container init 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 04:45:20 np0005548915 podman[111733]: 2025-12-06 09:45:20.515391202 +0000 UTC m=+0.140574244 container start 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 04:45:20 np0005548915 podman[111733]: 2025-12-06 09:45:20.518601047 +0000 UTC m=+0.143784109 container attach 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:45:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  6 04:45:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  6 04:45:20 np0005548915 eloquent_poitras[111749]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:45:20 np0005548915 eloquent_poitras[111749]: --> All data devices are unavailable
Dec  6 04:45:20 np0005548915 systemd[1]: libpod-41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b.scope: Deactivated successfully.
Dec  6 04:45:21 np0005548915 podman[111764]: 2025-12-06 09:45:21.002997691 +0000 UTC m=+0.028578290 container died 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 04:45:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c-merged.mount: Deactivated successfully.
Dec  6 04:45:21 np0005548915 podman[111764]: 2025-12-06 09:45:21.046911128 +0000 UTC m=+0.072491717 container remove 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:45:21 np0005548915 systemd[1]: libpod-conmon-41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b.scope: Deactivated successfully.
Dec  6 04:45:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:21.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:21 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:21 np0005548915 podman[111871]: 2025-12-06 09:45:21.657616496 +0000 UTC m=+0.049223819 container create 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:45:21 np0005548915 systemd[1]: Started libpod-conmon-57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6.scope.
Dec  6 04:45:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:45:21 np0005548915 podman[111871]: 2025-12-06 09:45:21.638754754 +0000 UTC m=+0.030362107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:45:21 np0005548915 podman[111871]: 2025-12-06 09:45:21.743122876 +0000 UTC m=+0.134730199 container init 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 04:45:21 np0005548915 podman[111871]: 2025-12-06 09:45:21.748607242 +0000 UTC m=+0.140214565 container start 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:45:21 np0005548915 podman[111871]: 2025-12-06 09:45:21.752181127 +0000 UTC m=+0.143788460 container attach 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 04:45:21 np0005548915 crazy_roentgen[111887]: 167 167
Dec  6 04:45:21 np0005548915 systemd[1]: libpod-57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6.scope: Deactivated successfully.
Dec  6 04:45:21 np0005548915 podman[111871]: 2025-12-06 09:45:21.75531766 +0000 UTC m=+0.146924953 container died 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 04:45:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c673074b7af76dd53150732d4348de36afd92f23ebe08569ec1c0fb69c33f3d4-merged.mount: Deactivated successfully.
Dec  6 04:45:21 np0005548915 podman[111871]: 2025-12-06 09:45:21.788625365 +0000 UTC m=+0.180232668 container remove 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:45:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:45:21 np0005548915 systemd[1]: libpod-conmon-57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6.scope: Deactivated successfully.
Dec  6 04:45:21 np0005548915 podman[111911]: 2025-12-06 09:45:21.952679091 +0000 UTC m=+0.039274204 container create aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 04:45:21 np0005548915 systemd[1]: Started libpod-conmon-aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af.scope.
Dec  6 04:45:22 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:45:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:22 np0005548915 podman[111911]: 2025-12-06 09:45:22.025720751 +0000 UTC m=+0.112315884 container init aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:45:22 np0005548915 podman[111911]: 2025-12-06 09:45:21.93679815 +0000 UTC m=+0.023393283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:45:22 np0005548915 podman[111911]: 2025-12-06 09:45:22.036857997 +0000 UTC m=+0.123453100 container start aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 04:45:22 np0005548915 podman[111911]: 2025-12-06 09:45:22.040601277 +0000 UTC m=+0.127196410 container attach aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 04:45:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]: {
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:    "1": [
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:        {
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "devices": [
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "/dev/loop3"
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            ],
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "lv_name": "ceph_lv0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "lv_size": "21470642176",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "name": "ceph_lv0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "tags": {
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.cluster_name": "ceph",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.crush_device_class": "",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.encrypted": "0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.osd_id": "1",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.type": "block",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.vdo": "0",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:                "ceph.with_tpm": "0"
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            },
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "type": "block",
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:            "vg_name": "ceph_vg0"
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:        }
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]:    ]
Dec  6 04:45:22 np0005548915 distracted_jemison[111928]: }
Dec  6 04:45:22 np0005548915 systemd[1]: libpod-aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af.scope: Deactivated successfully.
Dec  6 04:45:22 np0005548915 podman[111911]: 2025-12-06 09:45:22.36761195 +0000 UTC m=+0.454207103 container died aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 04:45:22 np0005548915 systemd[1]: var-lib-containers-storage-overlay-64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6-merged.mount: Deactivated successfully.
Dec  6 04:45:22 np0005548915 podman[111911]: 2025-12-06 09:45:22.418965884 +0000 UTC m=+0.505561017 container remove aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:45:22 np0005548915 systemd[1]: libpod-conmon-aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af.scope: Deactivated successfully.
Dec  6 04:45:22 np0005548915 systemd-logind[795]: New session 40 of user zuul.
Dec  6 04:45:22 np0005548915 systemd[1]: Started Session 40 of User zuul.
Dec  6 04:45:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:22 np0005548915 podman[112097]: 2025-12-06 09:45:22.985067008 +0000 UTC m=+0.045711285 container create 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Dec  6 04:45:23 np0005548915 systemd[1]: Started libpod-conmon-1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6.scope.
Dec  6 04:45:23 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:45:23 np0005548915 podman[112097]: 2025-12-06 09:45:22.969043532 +0000 UTC m=+0.029687829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:45:23 np0005548915 podman[112097]: 2025-12-06 09:45:23.07027081 +0000 UTC m=+0.130915147 container init 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:45:23 np0005548915 podman[112097]: 2025-12-06 09:45:23.07664472 +0000 UTC m=+0.137288997 container start 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:45:23 np0005548915 podman[112097]: 2025-12-06 09:45:23.080472281 +0000 UTC m=+0.141116598 container attach 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:45:23 np0005548915 jovial_jang[112114]: 167 167
Dec  6 04:45:23 np0005548915 systemd[1]: libpod-1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6.scope: Deactivated successfully.
Dec  6 04:45:23 np0005548915 podman[112097]: 2025-12-06 09:45:23.082022052 +0000 UTC m=+0.142666329 container died 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:45:23 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ede4d6a50d2390706b3694e995d68729d3d5a026e76e57b9e1e756f1dd5a40d2-merged.mount: Deactivated successfully.
Dec  6 04:45:23 np0005548915 podman[112097]: 2025-12-06 09:45:23.118570533 +0000 UTC m=+0.179214810 container remove 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:45:23 np0005548915 systemd[1]: libpod-conmon-1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6.scope: Deactivated successfully.
Dec  6 04:45:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:23 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:23 np0005548915 podman[112163]: 2025-12-06 09:45:23.258282873 +0000 UTC m=+0.024488881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:45:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:23 np0005548915 podman[112163]: 2025-12-06 09:45:23.441676134 +0000 UTC m=+0.207882132 container create 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:45:23 np0005548915 systemd[1]: Started libpod-conmon-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope.
Dec  6 04:45:23 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:45:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:23 np0005548915 podman[112163]: 2025-12-06 09:45:23.535350021 +0000 UTC m=+0.301556029 container init 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:45:23 np0005548915 podman[112163]: 2025-12-06 09:45:23.547359611 +0000 UTC m=+0.313565579 container start 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:45:23 np0005548915 podman[112163]: 2025-12-06 09:45:23.55074384 +0000 UTC m=+0.316949818 container attach 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:45:23
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'volumes', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:45:23 np0005548915 python3.9[112250]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:45:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:45:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:45:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:45:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:45:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:24 np0005548915 lvm[112357]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:45:24 np0005548915 lvm[112357]: VG ceph_vg0 finished
Dec  6 04:45:24 np0005548915 youthful_engelbart[112255]: {}
Dec  6 04:45:24 np0005548915 systemd[1]: libpod-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope: Deactivated successfully.
Dec  6 04:45:24 np0005548915 systemd[1]: libpod-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope: Consumed 1.182s CPU time.
Dec  6 04:45:24 np0005548915 podman[112163]: 2025-12-06 09:45:24.327909778 +0000 UTC m=+1.094115756 container died 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:45:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:45:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:45:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:45:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:45:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:45:24 np0005548915 systemd[1]: var-lib-containers-storage-overlay-30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242-merged.mount: Deactivated successfully.
Dec  6 04:45:24 np0005548915 podman[112163]: 2025-12-06 09:45:24.374082555 +0000 UTC m=+1.140288523 container remove 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:45:24 np0005548915 systemd[1]: libpod-conmon-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope: Deactivated successfully.
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.468881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324468907, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2731, "num_deletes": 252, "total_data_size": 7144487, "memory_usage": 7337744, "flush_reason": "Manual Compaction"}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324536525, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6722042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8213, "largest_seqno": 10943, "table_properties": {"data_size": 6708894, "index_size": 8490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 31688, "raw_average_key_size": 22, "raw_value_size": 6680720, "raw_average_value_size": 4688, "num_data_blocks": 370, "num_entries": 1425, "num_filter_entries": 1425, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014196, "oldest_key_time": 1765014196, "file_creation_time": 1765014324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 67786 microseconds, and 12382 cpu microseconds.
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.536660) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6722042 bytes OK
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.536713) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.539334) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.539372) EVENT_LOG_v1 {"time_micros": 1765014324539362, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.539398) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7131934, prev total WAL file size 7167784, number of live WAL files 2.
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.542647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6564KB)], [23(12MB)]
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324542682, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 19465119, "oldest_snapshot_seqno": -1}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4122 keys, 14793120 bytes, temperature: kUnknown
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324704689, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14793120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14759382, "index_size": 22300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 105124, "raw_average_key_size": 25, "raw_value_size": 14677813, "raw_average_value_size": 3560, "num_data_blocks": 957, "num_entries": 4122, "num_filter_entries": 4122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.704970) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14793120 bytes
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.706605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.1 rd, 91.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 12.2 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(5.1) write-amplify(2.2) OK, records in: 4658, records dropped: 536 output_compression: NoCompression
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.706637) EVENT_LOG_v1 {"time_micros": 1765014324706622, "job": 8, "event": "compaction_finished", "compaction_time_micros": 162088, "compaction_time_cpu_micros": 31255, "output_level": 6, "num_output_files": 1, "total_output_size": 14793120, "num_input_records": 4658, "num_output_records": 4122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324708539, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324712535, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.542582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:45:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:45:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:25 np0005548915 python3.9[112524]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  6 04:45:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:25 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:25.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:25 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:45:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:45:26 np0005548915 python3.9[112704]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:45:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:26 np0005548915 python3.9[112788]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 04:45:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:26.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:45:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:26.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:45:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:45:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:45:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210004370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:29 np0005548915 python3.9[112944]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:45:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:29.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:45:30 np0005548915 kernel: ganesha.nfsd[107241]: segfault at 50 ip 00007f92fb6f332e sp 00007f92b3ffe210 error 4 in libntirpc.so.5.8[7f92fb6d8000+2c000] likely on CPU 2 (core 0, socket 2)
Dec  6 04:45:30 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 04:45:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy ignored for local
Dec  6 04:45:30 np0005548915 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec  6 04:45:30 np0005548915 systemd[1]: Started Process Core Dump (PID 112947/UID 0).
Dec  6 04:45:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  6 04:45:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  6 04:45:31 np0005548915 systemd-coredump[112948]: Process 95701 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 66:#012#0  0x00007f92fb6f332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 04:45:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:31 np0005548915 systemd[1]: systemd-coredump@0-112947-0.service: Deactivated successfully.
Dec  6 04:45:31 np0005548915 systemd[1]: systemd-coredump@0-112947-0.service: Consumed 1.070s CPU time.
Dec  6 04:45:31 np0005548915 podman[113031]: 2025-12-06 09:45:31.356515381 +0000 UTC m=+0.031199780 container died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 04:45:31 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4-merged.mount: Deactivated successfully.
Dec  6 04:45:31 np0005548915 podman[113031]: 2025-12-06 09:45:31.398349702 +0000 UTC m=+0.073034091 container remove f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 04:45:31 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 04:45:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:31.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:31 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 04:45:31 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.105s CPU time.
Dec  6 04:45:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:45:31 np0005548915 python3.9[113151]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:45:32 np0005548915 python3.9[113304]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:45:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Dec  6 04:45:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:33 np0005548915 python3.9[113458]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  6 04:45:34 np0005548915 python3.9[113608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:45:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:35.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Dec  6 04:45:35 np0005548915 python3.9[113768]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:45:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094536 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:45:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:36.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:45:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:36.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:45:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:45:38 np0005548915 python3.9[113923]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:45:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:45:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:45:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094539 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:45:39 np0005548915 python3.9[114212]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 04:45:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec  6 04:45:40 np0005548915 python3.9[114362]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:45:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:45:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:45:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:41.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:41 np0005548915 python3.9[114517]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:45:41 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 1.
Dec  6 04:45:41 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:45:41 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.105s CPU time.
Dec  6 04:45:41 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:45:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec  6 04:45:41 np0005548915 podman[114568]: 2025-12-06 09:45:41.811791382 +0000 UTC m=+0.026448382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:45:42 np0005548915 podman[114568]: 2025-12-06 09:45:42.715069851 +0000 UTC m=+0.929726861 container create 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:45:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:45:42 np0005548915 podman[114568]: 2025-12-06 09:45:42.801075254 +0000 UTC m=+1.015732244 container init 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:45:42 np0005548915 podman[114568]: 2025-12-06 09:45:42.80768185 +0000 UTC m=+1.022338820 container start 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:45:42 np0005548915 bash[114568]: 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 04:45:42 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 04:45:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:45:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:43.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:43 np0005548915 python3.9[114778]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:45:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec  6 04:45:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:45.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:45.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:45:46 np0005548915 python3.9[114958]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:45:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:46.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:45:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec  6 04:45:47 np0005548915 python3.9[115114]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec  6 04:45:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:49 np0005548915 systemd[1]: session-40.scope: Deactivated successfully.
Dec  6 04:45:49 np0005548915 systemd[1]: session-40.scope: Consumed 18.394s CPU time.
Dec  6 04:45:49 np0005548915 systemd-logind[795]: Session 40 logged out. Waiting for processes to exit.
Dec  6 04:45:49 np0005548915 systemd-logind[795]: Removed session 40.
Dec  6 04:45:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:49.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:49 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:45:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:49 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:45:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:45:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:45:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:45:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:51.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:45:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:53.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:45:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:45:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:45:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:45:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:45:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:45:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:45:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:45:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:45:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:55.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:55.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0e4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:56.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:45:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:57.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:57 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:57 np0005548915 systemd-logind[795]: New session 41 of user zuul.
Dec  6 04:45:57 np0005548915 systemd[1]: Started Session 41 of User zuul.
Dec  6 04:45:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:45:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:58 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:58 np0005548915 python3.9[115319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:45:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:45:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:58 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:45:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:45:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:59 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:45:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:45:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:45:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:45:59 np0005548915 python3.9[115475]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:45:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:46:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094600 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:46:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:00 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:00 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec  6 04:46:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec  6 04:46:01 np0005548915 python3.9[115668]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:46:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:01 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:01.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:01 np0005548915 systemd[1]: session-41.scope: Deactivated successfully.
Dec  6 04:46:01 np0005548915 systemd[1]: session-41.scope: Consumed 2.851s CPU time.
Dec  6 04:46:01 np0005548915 systemd-logind[795]: Session 41 logged out. Waiting for processes to exit.
Dec  6 04:46:01 np0005548915 systemd-logind[795]: Removed session 41.
Dec  6 04:46:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:46:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:02 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:02 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:03.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:03 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:46:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:46:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:46:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:04 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:04 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:05 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:05.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:46:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:06 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:06 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:06.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:46:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:07 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:46:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:08 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:08 np0005548915 systemd-logind[795]: New session 42 of user zuul.
Dec  6 04:46:08 np0005548915 systemd[1]: Started Session 42 of User zuul.
Dec  6 04:46:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:08 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:46:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:46:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:09.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:09 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:09 np0005548915 python3.9[115882]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:46:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:10 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:10 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:46:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:46:11 np0005548915 python3.9[116036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:46:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:11 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:12 np0005548915 python3.9[116194]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:46:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:12 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:12 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:13 np0005548915 python3.9[116278]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:46:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:13.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:13 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:14 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:14 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:15.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:15 np0005548915 python3.9[116434]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:46:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:15 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:16 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:16 np0005548915 python3.9[116630]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:46:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:16 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:16.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:46:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:16.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:46:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:17.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:17 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:17 np0005548915 python3.9[116784]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:46:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:46:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:18 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:18 np0005548915 python3.9[116948]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:46:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:18 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:19 np0005548915 python3.9[117026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:46:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:19.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:19 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:19.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:19 np0005548915 python3.9[117180]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:46:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:20 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:20 np0005548915 python3.9[117258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:46:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:20 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:46:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:46:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:21.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:21 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:21.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:21 np0005548915 python3.9[117412]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:46:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:22 np0005548915 python3.9[117564]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:46:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:22 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:22 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:22 np0005548915 python3.9[117716]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:46:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:23.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:23 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:23 np0005548915 python3.9[117870]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:46:23
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.control', '.nfs', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'backups']
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:46:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:46:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:46:23 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:46:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:24 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:46:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:46:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:46:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:46:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:46:24 np0005548915 python3.9[118022]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:46:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:24 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:25.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:25 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:46:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:46:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:46:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:46:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:46:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:26 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:46:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:46:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:26 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:26.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:46:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:26.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:46:27 np0005548915 podman[118254]: 2025-12-06 09:46:27.219008304 +0000 UTC m=+0.069373371 container create 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:46:27 np0005548915 systemd[1]: Started libpod-conmon-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope.
Dec  6 04:46:27 np0005548915 podman[118254]: 2025-12-06 09:46:27.193935472 +0000 UTC m=+0.044300629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:46:27 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:46:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:27.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:27 np0005548915 podman[118254]: 2025-12-06 09:46:27.341170159 +0000 UTC m=+0.191535256 container init 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  6 04:46:27 np0005548915 podman[118254]: 2025-12-06 09:46:27.354798835 +0000 UTC m=+0.205163922 container start 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:46:27 np0005548915 podman[118254]: 2025-12-06 09:46:27.3598725 +0000 UTC m=+0.210237567 container attach 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:46:27 np0005548915 peaceful_leavitt[118318]: 167 167
Dec  6 04:46:27 np0005548915 systemd[1]: libpod-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope: Deactivated successfully.
Dec  6 04:46:27 np0005548915 conmon[118318]: conmon 34f836033009bb670541 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope/container/memory.events
Dec  6 04:46:27 np0005548915 podman[118254]: 2025-12-06 09:46:27.364926756 +0000 UTC m=+0.215291853 container died 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:46:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:27 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:27 np0005548915 systemd[1]: var-lib-containers-storage-overlay-31e48c2653377118af497828cca76dabdcdf1dcea2c17e27d9488df8bb277a42-merged.mount: Deactivated successfully.
Dec  6 04:46:27 np0005548915 podman[118254]: 2025-12-06 09:46:27.418194144 +0000 UTC m=+0.268559201 container remove 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:46:27 np0005548915 systemd[1]: libpod-conmon-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope: Deactivated successfully.
Dec  6 04:46:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:27.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:27 np0005548915 podman[118418]: 2025-12-06 09:46:27.599538245 +0000 UTC m=+0.051089330 container create 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:46:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:46:27 np0005548915 systemd[1]: Started libpod-conmon-918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6.scope.
Dec  6 04:46:27 np0005548915 podman[118418]: 2025-12-06 09:46:27.578765329 +0000 UTC m=+0.030316414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:46:27 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:46:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:27 np0005548915 podman[118418]: 2025-12-06 09:46:27.729321444 +0000 UTC m=+0.180872549 container init 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:46:27 np0005548915 podman[118418]: 2025-12-06 09:46:27.736644031 +0000 UTC m=+0.188195096 container start 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:46:27 np0005548915 podman[118418]: 2025-12-06 09:46:27.73994634 +0000 UTC m=+0.191497455 container attach 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:46:27 np0005548915 python3.9[118412]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:46:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:46:28 np0005548915 inspiring_nightingale[118435]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:46:28 np0005548915 inspiring_nightingale[118435]: --> All data devices are unavailable
Dec  6 04:46:28 np0005548915 systemd[1]: libpod-918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6.scope: Deactivated successfully.
Dec  6 04:46:28 np0005548915 podman[118418]: 2025-12-06 09:46:28.131767974 +0000 UTC m=+0.583319049 container died 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:46:28 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe-merged.mount: Deactivated successfully.
Dec  6 04:46:28 np0005548915 podman[118418]: 2025-12-06 09:46:28.175765714 +0000 UTC m=+0.627316779 container remove 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:46:28 np0005548915 systemd[1]: libpod-conmon-918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6.scope: Deactivated successfully.
Dec  6 04:46:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:28 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:28 np0005548915 python3.9[118640]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:46:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:28 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:28 np0005548915 podman[118733]: 2025-12-06 09:46:28.865657379 +0000 UTC m=+0.034711012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:46:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:46:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:29.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:46:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:29 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:46:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:46:29 np0005548915 python3.9[118876]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:46:29 np0005548915 podman[118733]: 2025-12-06 09:46:29.778641944 +0000 UTC m=+0.947695477 container create 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:46:29 np0005548915 systemd[90433]: Created slice User Background Tasks Slice.
Dec  6 04:46:29 np0005548915 systemd[90433]: Starting Cleanup of User's Temporary Files and Directories...
Dec  6 04:46:29 np0005548915 systemd[90433]: Finished Cleanup of User's Temporary Files and Directories.
Dec  6 04:46:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:30 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:30 np0005548915 systemd[1]: Started libpod-conmon-255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442.scope.
Dec  6 04:46:30 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:46:30 np0005548915 python3.9[119029]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:46:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:46:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:46:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:30 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:31.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:31 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:32 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:32 np0005548915 python3.9[119190]: ansible-service_facts Invoked
Dec  6 04:46:32 np0005548915 network[119208]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:46:32 np0005548915 network[119209]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:46:32 np0005548915 network[119210]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:46:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:32 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:33.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:33 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:33.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:33 np0005548915 podman[118733]: 2025-12-06 09:46:33.644846962 +0000 UTC m=+4.813900615 container init 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  6 04:46:33 np0005548915 podman[118733]: 2025-12-06 09:46:33.660916963 +0000 UTC m=+4.829970506 container start 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:46:33 np0005548915 festive_burnell[119033]: 167 167
Dec  6 04:46:33 np0005548915 systemd[1]: libpod-255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442.scope: Deactivated successfully.
Dec  6 04:46:33 np0005548915 podman[118733]: 2025-12-06 09:46:33.816645108 +0000 UTC m=+4.985698671 container attach 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:46:33 np0005548915 podman[118733]: 2025-12-06 09:46:33.817624164 +0000 UTC m=+4.986677728 container died 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 04:46:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:33 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e056fa3eac6f1bbe8c35fcf8ed15bad3cd3fbd6b6da6d8b4ceabef6c7ee386b9-merged.mount: Deactivated successfully.
Dec  6 04:46:33 np0005548915 podman[118733]: 2025-12-06 09:46:33.89171881 +0000 UTC m=+5.060772353 container remove 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  6 04:46:33 np0005548915 systemd[1]: libpod-conmon-255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442.scope: Deactivated successfully.
Dec  6 04:46:34 np0005548915 podman[119271]: 2025-12-06 09:46:34.062245482 +0000 UTC m=+0.051239794 container create 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  6 04:46:34 np0005548915 systemd[1]: Started libpod-conmon-4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b.scope.
Dec  6 04:46:34 np0005548915 podman[119271]: 2025-12-06 09:46:34.039130883 +0000 UTC m=+0.028125225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:46:34 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:46:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:34 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:34 np0005548915 podman[119271]: 2025-12-06 09:46:34.323843165 +0000 UTC m=+0.312837497 container init 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:46:34 np0005548915 podman[119271]: 2025-12-06 09:46:34.329553218 +0000 UTC m=+0.318547530 container start 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:46:34 np0005548915 podman[119271]: 2025-12-06 09:46:34.335409546 +0000 UTC m=+0.324403868 container attach 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:46:34 np0005548915 serene_elion[119292]: {
Dec  6 04:46:34 np0005548915 serene_elion[119292]:    "1": [
Dec  6 04:46:34 np0005548915 serene_elion[119292]:        {
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "devices": [
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "/dev/loop3"
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            ],
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "lv_name": "ceph_lv0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "lv_size": "21470642176",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "name": "ceph_lv0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "tags": {
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.cluster_name": "ceph",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.crush_device_class": "",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.encrypted": "0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.osd_id": "1",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.type": "block",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.vdo": "0",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:                "ceph.with_tpm": "0"
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            },
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "type": "block",
Dec  6 04:46:34 np0005548915 serene_elion[119292]:            "vg_name": "ceph_vg0"
Dec  6 04:46:34 np0005548915 serene_elion[119292]:        }
Dec  6 04:46:34 np0005548915 serene_elion[119292]:    ]
Dec  6 04:46:34 np0005548915 serene_elion[119292]: }
Dec  6 04:46:34 np0005548915 systemd[1]: libpod-4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b.scope: Deactivated successfully.
Dec  6 04:46:34 np0005548915 podman[119271]: 2025-12-06 09:46:34.61606884 +0000 UTC m=+0.605063152 container died 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 04:46:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472-merged.mount: Deactivated successfully.
Dec  6 04:46:34 np0005548915 podman[119271]: 2025-12-06 09:46:34.710322746 +0000 UTC m=+0.699317058 container remove 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 04:46:34 np0005548915 systemd[1]: libpod-conmon-4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b.scope: Deactivated successfully.
Dec  6 04:46:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:34 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:35 np0005548915 podman[119411]: 2025-12-06 09:46:35.220525244 +0000 UTC m=+0.021746863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:46:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:35.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:35 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:46:35 np0005548915 podman[119411]: 2025-12-06 09:46:35.407166288 +0000 UTC m=+0.208387887 container create 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:46:35 np0005548915 systemd[1]: Started libpod-conmon-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope.
Dec  6 04:46:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:35 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:46:35 np0005548915 podman[119411]: 2025-12-06 09:46:35.525173321 +0000 UTC m=+0.326394960 container init 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:46:35 np0005548915 podman[119411]: 2025-12-06 09:46:35.53294342 +0000 UTC m=+0.334165019 container start 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:46:35 np0005548915 podman[119411]: 2025-12-06 09:46:35.536982758 +0000 UTC m=+0.338204357 container attach 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 04:46:35 np0005548915 magical_kepler[119442]: 167 167
Dec  6 04:46:35 np0005548915 systemd[1]: libpod-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope: Deactivated successfully.
Dec  6 04:46:35 np0005548915 conmon[119442]: conmon 601ba157cc55cc25a746 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope/container/memory.events
Dec  6 04:46:35 np0005548915 podman[119411]: 2025-12-06 09:46:35.539954667 +0000 UTC m=+0.341176266 container died 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:46:35 np0005548915 systemd[1]: var-lib-containers-storage-overlay-611793e3e6e0182aead80eaef4dee6fee38d78a3d6e81192beed2f617d4eb8d8-merged.mount: Deactivated successfully.
Dec  6 04:46:35 np0005548915 podman[119411]: 2025-12-06 09:46:35.845949351 +0000 UTC m=+0.647170980 container remove 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:46:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:46:35 np0005548915 systemd[1]: libpod-conmon-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope: Deactivated successfully.
Dec  6 04:46:36 np0005548915 podman[119491]: 2025-12-06 09:46:36.016444902 +0000 UTC m=+0.025056392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:46:36 np0005548915 kernel: ganesha.nfsd[115161]: segfault at 50 ip 00007fe19309632e sp 00007fe154ff8210 error 4 in libntirpc.so.5.8[7fe19307b000+2c000] likely on CPU 7 (core 0, socket 7)
Dec  6 04:46:36 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 04:46:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:36 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0003c10 fd 38 proxy ignored for local
Dec  6 04:46:36 np0005548915 podman[119491]: 2025-12-06 09:46:36.219852985 +0000 UTC m=+0.228464485 container create 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 04:46:36 np0005548915 systemd[1]: Started Process Core Dump (PID 119517/UID 0).
Dec  6 04:46:36 np0005548915 systemd[1]: Started libpod-conmon-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope.
Dec  6 04:46:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:46:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:36 np0005548915 podman[119491]: 2025-12-06 09:46:36.455590205 +0000 UTC m=+0.464201745 container init 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 04:46:36 np0005548915 podman[119491]: 2025-12-06 09:46:36.465045299 +0000 UTC m=+0.473656759 container start 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 04:46:36 np0005548915 podman[119491]: 2025-12-06 09:46:36.60724274 +0000 UTC m=+0.615854390 container attach 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 04:46:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:36.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:46:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:36.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:46:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:36.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:46:37 np0005548915 lvm[119638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:46:37 np0005548915 lvm[119638]: VG ceph_vg0 finished
Dec  6 04:46:37 np0005548915 hopeful_chaplygin[119523]: {}
Dec  6 04:46:37 np0005548915 lvm[119643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:46:37 np0005548915 lvm[119643]: VG ceph_vg0 finished
Dec  6 04:46:37 np0005548915 systemd[1]: libpod-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope: Deactivated successfully.
Dec  6 04:46:37 np0005548915 podman[119491]: 2025-12-06 09:46:37.26408806 +0000 UTC m=+1.272699520 container died 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:46:37 np0005548915 systemd[1]: libpod-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope: Consumed 1.208s CPU time.
Dec  6 04:46:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:37.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:37.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:46:37 np0005548915 systemd-coredump[119519]: Process 114587 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 52:#012#0  0x00007fe19309632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 04:46:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf-merged.mount: Deactivated successfully.
Dec  6 04:46:37 np0005548915 systemd[1]: systemd-coredump@1-119517-0.service: Deactivated successfully.
Dec  6 04:46:37 np0005548915 systemd[1]: systemd-coredump@1-119517-0.service: Consumed 1.162s CPU time.
Dec  6 04:46:38 np0005548915 podman[119491]: 2025-12-06 09:46:38.131905155 +0000 UTC m=+2.140516615 container remove 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:46:38 np0005548915 podman[119779]: 2025-12-06 09:46:38.146728812 +0000 UTC m=+0.081630359 container died 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:46:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:46:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:46:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:46:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:39 np0005548915 python3.9[119990]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:46:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:46:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:46:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb-merged.mount: Deactivated successfully.
Dec  6 04:46:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:40 np0005548915 podman[119779]: 2025-12-06 09:46:40.095678141 +0000 UTC m=+2.030579688 container remove 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:46:40 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 04:46:40 np0005548915 systemd[1]: libpod-conmon-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope: Deactivated successfully.
Dec  6 04:46:40 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 04:46:40 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.507s CPU time.
Dec  6 04:46:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
Dec  6 04:46:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
Dec  6 04:46:41 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:41 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:46:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:41.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:46:42 np0005548915 python3.9[120202]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  6 04:46:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094642 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:46:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:43.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:43 np0005548915 python3.9[120356]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:46:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:46:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:44 np0005548915 python3.9[120434]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:46:44 np0005548915 python3.9[120586]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:46:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:45 np0005548915 python3.9[120666]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:46:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:46:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:46.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:46:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:46:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:46:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:47.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:47 np0005548915 python3.9[120845]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:46:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:46:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:49 np0005548915 python3.9[120998]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:46:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:49.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:46:50 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 2.
Dec  6 04:46:50 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:46:50 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.507s CPU time.
Dec  6 04:46:50 np0005548915 python3.9[121083]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:46:50 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:46:50 np0005548915 podman[121156]: 2025-12-06 09:46:50.615540684 +0000 UTC m=+0.048280315 container create 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:46:50 np0005548915 podman[121156]: 2025-12-06 09:46:50.589669921 +0000 UTC m=+0.022409582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:46:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:46:50 np0005548915 podman[121156]: 2025-12-06 09:46:50.724577467 +0000 UTC m=+0.157317118 container init 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:46:50 np0005548915 podman[121156]: 2025-12-06 09:46:50.730383583 +0000 UTC m=+0.163123214 container start 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 04:46:50 np0005548915 bash[121156]: 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 04:46:50 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:46:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
Dec  6 04:46:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
Dec  6 04:46:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:51.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:51 np0005548915 systemd[1]: session-42.scope: Deactivated successfully.
Dec  6 04:46:51 np0005548915 systemd[1]: session-42.scope: Consumed 25.642s CPU time.
Dec  6 04:46:51 np0005548915 systemd-logind[795]: Session 42 logged out. Waiting for processes to exit.
Dec  6 04:46:51 np0005548915 systemd-logind[795]: Removed session 42.
Dec  6 04:46:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:46:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:46:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:53.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:46:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:46:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:46:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:46:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:46:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:46:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:46:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:46:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:46:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:46:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:55.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:46:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:56 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:46:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:56 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:46:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:56 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 04:46:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:46:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:46:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:56.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:46:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:57.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:46:58 np0005548915 systemd-logind[795]: New session 43 of user zuul.
Dec  6 04:46:58 np0005548915 systemd[1]: Started Session 43 of User zuul.
Dec  6 04:46:58 np0005548915 python3.9[121376]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:46:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:46:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:46:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:46:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:46:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:46:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:59.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:46:59 np0005548915 python3.9[121530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:46:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:47:00 np0005548915 python3.9[121608]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:47:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:47:00 np0005548915 systemd[1]: session-43.scope: Deactivated successfully.
Dec  6 04:47:00 np0005548915 systemd[1]: session-43.scope: Consumed 1.517s CPU time.
Dec  6 04:47:00 np0005548915 systemd-logind[795]: Session 43 logged out. Waiting for processes to exit.
Dec  6 04:47:00 np0005548915 systemd-logind[795]: Removed session 43.
Dec  6 04:47:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094700 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:47:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:00 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:47:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:00 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:47:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:00 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:47:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:01 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 04:47:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 04:47:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 04:47:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:01.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:47:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:03.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Dec  6 04:47:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:05.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:05.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:05 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:05 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:05 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:47:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:06.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:47:07 np0005548915 systemd-logind[795]: New session 44 of user zuul.
Dec  6 04:47:07 np0005548915 systemd[1]: Started Session 44 of User zuul.
Dec  6 04:47:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:07.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:07.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Dec  6 04:47:08 np0005548915 python3.9[121819]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:47:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:47:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:47:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:09.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:09 np0005548915 python3.9[121977]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Dec  6 04:47:10 np0005548915 python3.9[122152]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:10 np0005548915 python3.9[122230]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fej78wwi recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:47:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:47:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:11.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:11 np0005548915 python3.9[122384]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Dec  6 04:47:12 np0005548915 python3.9[122462]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.9w1ifd66 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:47:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:13 np0005548915 python3.9[122626]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:47:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:13.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:13 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:13 np0005548915 python3.9[122784]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  6 04:47:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:14 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:14 np0005548915 python3.9[122862]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:47:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:14 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:14 np0005548915 python3.9[123014]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:15.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:15 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:15 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:47:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:15 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:47:15 np0005548915 python3.9[123094]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:47:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  6 04:47:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094716 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:47:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:16 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:16 np0005548915 python3.9[123246]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:16 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:16.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:47:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:16.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:47:17 np0005548915 python3.9[123398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:17 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:17.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:17 np0005548915 python3.9[123478]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  6 04:47:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:18 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:18 np0005548915 python3.9[123630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:18 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:47:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:18 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:19 np0005548915 python3.9[123708]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:19.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:19 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:19.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:47:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:20 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:20 np0005548915 python3.9[123862]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:47:20 np0005548915 systemd[1]: Reloading.
Dec  6 04:47:20 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:47:20 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:47:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:47:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:47:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:20 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094721 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:47:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:21.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:21 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:21 np0005548915 python3.9[124054]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:21.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:47:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:22 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:22 np0005548915 python3.9[124132]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:22 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:23 np0005548915 python3.9[124285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:23.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:23 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:47:23
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.control', '.nfs', 'volumes', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.mgr']
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:47:23 np0005548915 python3.9[124364]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:47:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:47:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:47:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:47:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:24 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:47:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:47:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:24 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:25 np0005548915 python3.9[124516]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:47:25 np0005548915 systemd[1]: Reloading.
Dec  6 04:47:25 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:47:25 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:47:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:25 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:25 np0005548915 systemd[1]: Starting Create netns directory...
Dec  6 04:47:25 np0005548915 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 04:47:25 np0005548915 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 04:47:25 np0005548915 systemd[1]: Finished Create netns directory.
Dec  6 04:47:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  6 04:47:26 np0005548915 kernel: ganesha.nfsd[122490]: segfault at 50 ip 00007f839032632e sp 00007f835a7fb210 error 4 in libntirpc.so.5.8[7f839030b000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  6 04:47:26 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 04:47:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:26 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8002910 fd 39 proxy ignored for local
Dec  6 04:47:26 np0005548915 systemd[1]: Started Process Core Dump (PID 124684/UID 0).
Dec  6 04:47:26 np0005548915 python3.9[124736]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:47:26 np0005548915 network[124753]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:47:26 np0005548915 network[124754]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:47:26 np0005548915 network[124755]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:47:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:26.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:47:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:47:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:27.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:47:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:47:27 np0005548915 systemd-coredump[124688]: Process 121175 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f839032632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 04:47:27 np0005548915 systemd[1]: systemd-coredump@2-124684-0.service: Deactivated successfully.
Dec  6 04:47:27 np0005548915 systemd[1]: systemd-coredump@2-124684-0.service: Consumed 1.512s CPU time.
Dec  6 04:47:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  6 04:47:27 np0005548915 podman[124798]: 2025-12-06 09:47:27.917091758 +0000 UTC m=+0.025247208 container died 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:47:27 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c-merged.mount: Deactivated successfully.
Dec  6 04:47:27 np0005548915 podman[124798]: 2025-12-06 09:47:27.985998265 +0000 UTC m=+0.094153705 container remove 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:47:27 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 04:47:28 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 04:47:28 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.781s CPU time.
Dec  6 04:47:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:29.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:29.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:47:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:47:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:47:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:31.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:31 np0005548915 python3.9[125071]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:31.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:47:31 np0005548915 python3.9[125149]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094732 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:47:32 np0005548915 python3.9[125301]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:33 np0005548915 python3.9[125455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:33.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:47:33 np0005548915 python3.9[125533]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:35 np0005548915 python3.9[125686]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  6 04:47:35 np0005548915 systemd[1]: Starting Time & Date Service...
Dec  6 04:47:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:35.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:35 np0005548915 systemd[1]: Started Time & Date Service.
Dec  6 04:47:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:35.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:47:36 np0005548915 python3.9[125843]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:36.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:47:36 np0005548915 python3.9[125995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:37 np0005548915 python3.9[126075]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 04:47:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 04:47:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:47:38 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 3.
Dec  6 04:47:38 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:47:38 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.781s CPU time.
Dec  6 04:47:38 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:47:38 np0005548915 python3.9[126245]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:38 np0005548915 podman[126276]: 2025-12-06 09:47:38.4645514 +0000 UTC m=+0.107152256 container create 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 04:47:38 np0005548915 podman[126276]: 2025-12-06 09:47:38.381809263 +0000 UTC m=+0.024410179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:47:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:38 np0005548915 podman[126276]: 2025-12-06 09:47:38.889010815 +0000 UTC m=+0.531611761 container init 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:47:38 np0005548915 podman[126276]: 2025-12-06 09:47:38.901351582 +0000 UTC m=+0.543952458 container start 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:47:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:47:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:47:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 04:47:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 04:47:38 np0005548915 bash[126276]: 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86
Dec  6 04:47:38 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:47:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:39 np0005548915 python3.9[126371]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.vnmm999w recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 04:47:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 04:47:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 04:47:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 04:47:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 04:47:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:47:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:39.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:39.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:39 np0005548915 python3.9[126564]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:47:40 np0005548915 python3.9[126642]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  6 04:47:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:47:41 np0005548915 python3.9[126874]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:47:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:47:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:41.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:41.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:41 np0005548915 podman[127087]: 2025-12-06 09:47:41.697354143 +0000 UTC m=+0.031696572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:47:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:47:41 np0005548915 podman[127087]: 2025-12-06 09:47:41.991707272 +0000 UTC m=+0.326049671 container create 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:47:42 np0005548915 python3[127135]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 04:47:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:47:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:47:42 np0005548915 systemd[1]: Started libpod-conmon-6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13.scope.
Dec  6 04:47:42 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:47:42 np0005548915 python3.9[127292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:42 np0005548915 podman[127087]: 2025-12-06 09:47:42.843933677 +0000 UTC m=+1.178276096 container init 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:47:42 np0005548915 podman[127087]: 2025-12-06 09:47:42.852902755 +0000 UTC m=+1.187245154 container start 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:47:42 np0005548915 tender_volhard[127162]: 167 167
Dec  6 04:47:42 np0005548915 systemd[1]: libpod-6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13.scope: Deactivated successfully.
Dec  6 04:47:42 np0005548915 podman[127087]: 2025-12-06 09:47:42.962415743 +0000 UTC m=+1.296758142 container attach 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:47:42 np0005548915 podman[127087]: 2025-12-06 09:47:42.962880666 +0000 UTC m=+1.297223065 container died 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  6 04:47:43 np0005548915 python3.9[127383]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7463cabeea252a08593052d1e73817ccc6d0aa370dbaee0066e4e051a28fb62e-merged.mount: Deactivated successfully.
Dec  6 04:47:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:43.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:43.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:43 np0005548915 podman[127087]: 2025-12-06 09:47:43.598834786 +0000 UTC m=+1.933177206 container remove 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 04:47:43 np0005548915 systemd[1]: libpod-conmon-6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13.scope: Deactivated successfully.
Dec  6 04:47:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 04:47:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2525 writes, 11K keys, 2524 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 2525 writes, 2524 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2525 writes, 11K keys, 2524 commit groups, 1.0 writes per commit group, ingest: 23.56 MB, 0.04 MB/s
Interval WAL: 2525 writes, 2524 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.5      0.24              0.04         4    0.061       0      0       0.0       0.0
  L6      1/0   14.11 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     88.0     78.1      0.50              0.12         3    0.165     12K   1351       0.0       0.0
 Sum      1/0   14.11 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.0     59.1     78.2      0.74              0.15         7    0.105     12K   1351       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.0     59.7     78.9      0.73              0.15         6    0.122     12K   1351       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     88.0     78.1      0.50              0.12         3    0.165     12K   1351       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     80.6      0.24              0.04         3    0.078       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.019, interval 0.019
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.06 GB write, 0.10 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 1.12 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(72,1006.20 KB,0.32323%) FilterBlock(8,43.55 KB,0.0139889%) IndexBlock(8,94.20 KB,0.0302616%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  6 04:47:43 np0005548915 podman[127496]: 2025-12-06 09:47:43.76349623 +0000 UTC m=+0.032453453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:47:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec  6 04:47:43 np0005548915 podman[127496]: 2025-12-06 09:47:43.892125507 +0000 UTC m=+0.161082690 container create b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:47:44 np0005548915 python3.9[127562]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:44 np0005548915 systemd[1]: Started libpod-conmon-b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f.scope.
Dec  6 04:47:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:47:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:44 np0005548915 python3.9[127644]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:45.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:45 np0005548915 podman[127496]: 2025-12-06 09:47:45.762873853 +0000 UTC m=+2.031831086 container init b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:47:45 np0005548915 podman[127496]: 2025-12-06 09:47:45.775117388 +0000 UTC m=+2.044074571 container start b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:47:45 np0005548915 python3.9[127798]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:47:46 np0005548915 nifty_villani[127642]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:47:46 np0005548915 nifty_villani[127642]: --> All data devices are unavailable
Dec  6 04:47:46 np0005548915 podman[127496]: 2025-12-06 09:47:46.146533153 +0000 UTC m=+2.415490336 container attach b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 04:47:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:47:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:47:46 np0005548915 podman[127496]: 2025-12-06 09:47:46.180652229 +0000 UTC m=+2.449609422 container died b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 04:47:46 np0005548915 systemd[1]: libpod-b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f.scope: Deactivated successfully.
Dec  6 04:47:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1-merged.mount: Deactivated successfully.
Dec  6 04:47:46 np0005548915 podman[127496]: 2025-12-06 09:47:46.437723797 +0000 UTC m=+2.706681000 container remove b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 04:47:46 np0005548915 python3.9[127923]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:46 np0005548915 systemd[1]: libpod-conmon-b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f.scope: Deactivated successfully.
Dec  6 04:47:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:46.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:47:47 np0005548915 podman[128168]: 2025-12-06 09:47:47.015802511 +0000 UTC m=+0.048467518 container create 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 04:47:47 np0005548915 podman[128168]: 2025-12-06 09:47:46.991364841 +0000 UTC m=+0.024029848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:47:47 np0005548915 python3.9[128167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:47 np0005548915 systemd[1]: Started libpod-conmon-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope.
Dec  6 04:47:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:47:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:47.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:47 np0005548915 systemd[1]: session-18.scope: Deactivated successfully.
Dec  6 04:47:47 np0005548915 systemd[1]: session-18.scope: Consumed 1min 46.097s CPU time.
Dec  6 04:47:47 np0005548915 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Dec  6 04:47:47 np0005548915 systemd-logind[795]: Removed session 18.
Dec  6 04:47:47 np0005548915 podman[128168]: 2025-12-06 09:47:47.461761395 +0000 UTC m=+0.494426442 container init 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:47:47 np0005548915 podman[128168]: 2025-12-06 09:47:47.472232604 +0000 UTC m=+0.504897591 container start 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 04:47:47 np0005548915 podman[128168]: 2025-12-06 09:47:47.475765727 +0000 UTC m=+0.508430784 container attach 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:47:47 np0005548915 relaxed_bartik[128196]: 167 167
Dec  6 04:47:47 np0005548915 systemd[1]: libpod-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope: Deactivated successfully.
Dec  6 04:47:47 np0005548915 conmon[128196]: conmon 466d1754445ce036bc9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope/container/memory.events
Dec  6 04:47:47 np0005548915 podman[128168]: 2025-12-06 09:47:47.484593591 +0000 UTC m=+0.517258578 container died 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 04:47:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:47.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:47 np0005548915 python3.9[128266]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a801b94a3e271abf75f68e437b5bc99cc416f1baeabcc0a5cdc1d36bd508898b-merged.mount: Deactivated successfully.
Dec  6 04:47:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:47:48 np0005548915 podman[128168]: 2025-12-06 09:47:48.035219956 +0000 UTC m=+1.067884943 container remove 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 04:47:48 np0005548915 systemd[1]: libpod-conmon-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope: Deactivated successfully.
Dec  6 04:47:48 np0005548915 podman[128387]: 2025-12-06 09:47:48.20482113 +0000 UTC m=+0.043426763 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:47:48 np0005548915 python3.9[128453]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:47:48 np0005548915 podman[128387]: 2025-12-06 09:47:48.824657424 +0000 UTC m=+0.663263067 container create 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  6 04:47:48 np0005548915 systemd[1]: Started libpod-conmon-7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb.scope.
Dec  6 04:47:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:47:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:48 np0005548915 podman[128387]: 2025-12-06 09:47:48.950213568 +0000 UTC m=+0.788819211 container init 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:47:48 np0005548915 podman[128387]: 2025-12-06 09:47:48.969011167 +0000 UTC m=+0.807616780 container start 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:47:48 np0005548915 podman[128387]: 2025-12-06 09:47:48.973635751 +0000 UTC m=+0.812241384 container attach 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:47:49 np0005548915 python3.9[128531]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]: {
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:    "1": [
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:        {
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "devices": [
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "/dev/loop3"
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            ],
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "lv_name": "ceph_lv0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "lv_size": "21470642176",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "name": "ceph_lv0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "tags": {
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.cluster_name": "ceph",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.crush_device_class": "",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.encrypted": "0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.osd_id": "1",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.type": "block",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.vdo": "0",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:                "ceph.with_tpm": "0"
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            },
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "type": "block",
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:            "vg_name": "ceph_vg0"
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:        }
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]:    ]
Dec  6 04:47:49 np0005548915 infallible_torvalds[128534]: }
Dec  6 04:47:49 np0005548915 systemd[1]: libpod-7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb.scope: Deactivated successfully.
Dec  6 04:47:49 np0005548915 podman[128387]: 2025-12-06 09:47:49.306406709 +0000 UTC m=+1.145012312 container died 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.374080) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469374318, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1369, "num_deletes": 250, "total_data_size": 2553782, "memory_usage": 2594920, "flush_reason": "Manual Compaction"}
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469391926, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1471945, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10944, "largest_seqno": 12312, "table_properties": {"data_size": 1467153, "index_size": 2188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12146, "raw_average_key_size": 20, "raw_value_size": 1456834, "raw_average_value_size": 2407, "num_data_blocks": 97, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014324, "oldest_key_time": 1765014324, "file_creation_time": 1765014469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17899 microseconds, and 5249 cpu microseconds.
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:47:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:49.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.391979) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1471945 bytes OK
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.392000) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.476564) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.476614) EVENT_LOG_v1 {"time_micros": 1765014469476604, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.476636) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2547897, prev total WAL file size 2547897, number of live WAL files 2.
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.477675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1437KB)], [26(14MB)]
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469477749, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16265065, "oldest_snapshot_seqno": -1}
Dec  6 04:47:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d-merged.mount: Deactivated successfully.
Dec  6 04:47:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:49.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4278 keys, 14215627 bytes, temperature: kUnknown
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469686828, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14215627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14182730, "index_size": 21075, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 108743, "raw_average_key_size": 25, "raw_value_size": 14100334, "raw_average_value_size": 3296, "num_data_blocks": 902, "num_entries": 4278, "num_filter_entries": 4278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.687860) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14215627 bytes
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.689938) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.8 rd, 68.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 14.1 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(20.7) write-amplify(9.7) OK, records in: 4727, records dropped: 449 output_compression: NoCompression
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.690376) EVENT_LOG_v1 {"time_micros": 1765014469690165, "job": 10, "event": "compaction_finished", "compaction_time_micros": 209146, "compaction_time_cpu_micros": 28449, "output_level": 6, "num_output_files": 1, "total_output_size": 14215627, "num_input_records": 4727, "num_output_records": 4278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469692463, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469696756, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.477579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:47:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:47:49 np0005548915 podman[128387]: 2025-12-06 09:47:49.815944902 +0000 UTC m=+1.654550535 container remove 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  6 04:47:49 np0005548915 systemd[1]: libpod-conmon-7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb.scope: Deactivated successfully.
Dec  6 04:47:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:47:50 np0005548915 python3.9[128707]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:47:50 np0005548915 podman[128877]: 2025-12-06 09:47:50.332615005 +0000 UTC m=+0.049064465 container create 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:47:50 np0005548915 systemd[1]: Started libpod-conmon-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope.
Dec  6 04:47:50 np0005548915 podman[128877]: 2025-12-06 09:47:50.304925819 +0000 UTC m=+0.021375289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:47:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:47:50 np0005548915 podman[128877]: 2025-12-06 09:47:50.424635409 +0000 UTC m=+0.141084879 container init 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:47:50 np0005548915 podman[128877]: 2025-12-06 09:47:50.433561126 +0000 UTC m=+0.150010586 container start 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 04:47:50 np0005548915 podman[128877]: 2025-12-06 09:47:50.437217442 +0000 UTC m=+0.153667293 container attach 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:47:50 np0005548915 quizzical_matsumoto[128894]: 167 167
Dec  6 04:47:50 np0005548915 systemd[1]: libpod-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope: Deactivated successfully.
Dec  6 04:47:50 np0005548915 conmon[128894]: conmon 8387ca743f1ee6ef0789 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope/container/memory.events
Dec  6 04:47:50 np0005548915 podman[128877]: 2025-12-06 09:47:50.440923082 +0000 UTC m=+0.157372532 container died 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:47:50 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c4fb1eef263ac1a726038f5573905885962a4783c6a5f65d6d1908503c2c51dd-merged.mount: Deactivated successfully.
Dec  6 04:47:50 np0005548915 podman[128877]: 2025-12-06 09:47:50.485612649 +0000 UTC m=+0.202062109 container remove 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:47:50 np0005548915 systemd[1]: libpod-conmon-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope: Deactivated successfully.
Dec  6 04:47:50 np0005548915 podman[128971]: 2025-12-06 09:47:50.678544792 +0000 UTC m=+0.038102723 container create 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:47:50 np0005548915 systemd[1]: Started libpod-conmon-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope.
Dec  6 04:47:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:47:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:47:50 np0005548915 podman[128971]: 2025-12-06 09:47:50.660823422 +0000 UTC m=+0.020381383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:47:50 np0005548915 podman[128971]: 2025-12-06 09:47:50.764460854 +0000 UTC m=+0.124018795 container init 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:47:50 np0005548915 podman[128971]: 2025-12-06 09:47:50.774926902 +0000 UTC m=+0.134484833 container start 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec  6 04:47:50 np0005548915 podman[128971]: 2025-12-06 09:47:50.777803249 +0000 UTC m=+0.137361180 container attach 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 04:47:50 np0005548915 python3.9[129007]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:50 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:47:50 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:47:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  6 04:47:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  6 04:47:51 np0005548915 lvm[129210]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:47:51 np0005548915 lvm[129210]: VG ceph_vg0 finished
Dec  6 04:47:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:51 np0005548915 bold_mestorf[129010]: {}
Dec  6 04:47:51 np0005548915 systemd[1]: libpod-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope: Deactivated successfully.
Dec  6 04:47:51 np0005548915 podman[128971]: 2025-12-06 09:47:51.515974424 +0000 UTC m=+0.875532355 container died 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:47:51 np0005548915 systemd[1]: libpod-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope: Consumed 1.107s CPU time.
Dec  6 04:47:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d-merged.mount: Deactivated successfully.
Dec  6 04:47:51 np0005548915 podman[128971]: 2025-12-06 09:47:51.562595543 +0000 UTC m=+0.922153474 container remove 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 04:47:51 np0005548915 systemd[1]: libpod-conmon-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope: Deactivated successfully.
Dec  6 04:47:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:51.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:47:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:47:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:51 np0005548915 python3.9[129242]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:47:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd034000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:52 np0005548915 python3.9[129433]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:47:52 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:52 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:47:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:53 np0005548915 python3.9[129601]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  6 04:47:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:53.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  6 04:47:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:47:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:47:53 np0005548915 python3.9[129754]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  6 04:47:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:47:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:47:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:47:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:47:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:47:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:47:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094754 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:47:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:54 np0005548915 systemd[1]: session-44.scope: Deactivated successfully.
Dec  6 04:47:54 np0005548915 systemd[1]: session-44.scope: Consumed 32.293s CPU time.
Dec  6 04:47:54 np0005548915 systemd-logind[795]: Session 44 logged out. Waiting for processes to exit.
Dec  6 04:47:54 np0005548915 systemd-logind[795]: Removed session 44.
Dec  6 04:47:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:55.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec  6 04:47:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:56.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:47:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:56.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:47:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:57.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec  6 04:47:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:47:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:47:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:47:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:59.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:47:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:47:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:47:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:47:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:48:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:00 np0005548915 systemd-logind[795]: New session 45 of user zuul.
Dec  6 04:48:00 np0005548915 systemd[1]: Started Session 45 of User zuul.
Dec  6 04:48:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:48:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:48:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:01.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:01 np0005548915 python3.9[129942]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  6 04:48:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:48:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:02 np0005548915 python3.9[130094]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:48:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:03 np0005548915 python3.9[130249]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  6 04:48:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:03.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:48:03 np0005548915 python3.9[130402]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.hbyixhmr follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:04 np0005548915 python3.9[130527]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.hbyixhmr mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014483.4988-102-145352204778336/.source.hbyixhmr _original_basename=.8ex5zv4n follow=False checksum=741dc69011fb61b699872c865e152b9968457717 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:05 np0005548915 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  6 04:48:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:05 np0005548915 python3.9[130681]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:48:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:06 np0005548915 python3.9[130860]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDneZurSARwLaZA1xEymzXlvVAPvP8u0PCrqXuMYD5ewImDDChRITnk4XHKT/DUfrSJf9/7oJsddEbLRjhCtedqrMZsCkWz1BxtCmPBuvz2LfFhEn27TjqYLctOVGigQGsj6ILvPOzzLiapd93yApWDmH6P0un/ltmdM0iZLygNpzG3HLF8STBXzlo/8slci69Em7XppcrOpl1TS7DaVlpNcRQvo9pFuIrbMD9g0DOdMwk5YCH6g7OzGWqq0gt0YUOztmsqxWHKav3E0SXAD/vkgRc/1ZCNGFNSvf0dIgimCF3xlNWrppnvNgQ1BRqiQ7RArlOp1bVg0Ugdce6f4TIrq36Ois2U5+/myF5WQ7l9hRMRvoP64hSSsRAIDobTI/zMStUP3iZPFngxDxwQtpydHfFGywBL9811c42U7JsGxE8890uOIDk/oOkyhSH6KHQCPFjmKBJ98nT01lgnXyFSNOqds6QOYBasUWNFWd2wS7YpTheGlVVM8bk/gB4K2L0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMkn8zp09tRuEaH/bUoP0rYj+dziM1KcqMKxOgM9K1U#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrMdvJJYP0cflC7RDFsxwr66nSp9R7QU726CAfJcKLw6vHh8Z9Lw5wLH0kiaSpsb6SAPffloplHEDiwTOkghOc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAiB67qk/R3IfGpcAH1Ojopc8KX94De+Kxs31cKQLD04X+4QRXPRdMxU85LOhN58eKoHaBi8cgqk7+dvRypGD5vbtbRN9r0VN7tGwiSQTlVFbEuhn0AEbnRwNAMWEEMHO9kEjufP4N2zEEhtQBXy9oO2tMX3+BX4Z3YZZMQyZUgohdBHp2VCul9VdRuo0oHSr8HHm0nN61dMjalnThmgkGAu5hG8qhkWT4i9hroSKBsR5kVBUFTqdXekYkVy4YIYfM2lBXiMOFHtvr1a+KOyIfgWMb7GBPW7oKqtzCfVgSbGaUhSvGzs1OWt3U/PjjapIlmDnwD5ukzVxWV5ldh0vA48tXh5R1wqAoN5/Y/RiAKaY2kd/fvtkhvVDGZluXOz5jJ02IFHm+v4dP3Ig8YOuS5BEkWFuJHkblW0t/+4siTHWwmGEuvUI6y8Gb2pGcBKsWCJtLePYzT09IAmrjwO0jAgbWy0nvCZ+SKlbBBrXP6OgNgMkA+GH9iGOl6FOuRok=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYNj3LmNvR0emoQHuuy9NKXPivs/dznunVy8GExnJl8#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJhKmGSvg8FMw16qKPzk6Pyj+OHkN3bmk20mts1PdCRcNRnn9sT1DgI6U8Aze1tjGPujT4eDL+Y9r/hsrfM4qDc=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtvqYC0W0zPSX/plyJvm0q1VGDScYTNlcCdllukOe81JRfU3GhVusPZOX0xRSaLP/lmXtfqWcbBRCkLsmFrAo2EHn1CMqMr5WkhY4+rgApF+MGLDOUo57tlKZLPIwdL0SSY/Qv8lBfrqr7LUDZ7fTTTbqTzim/bncxg/u0KxSWBdvjfmYi13SwO65wDkFqSVYa3h8DNij6cRRjQ0fJuJ9Da860hmMnqo9GJMU6dq3zMXXn3YfuF4E4M0UQdlWmVW4EwBTzsfA1XYbSpW7VdRJw6esB4vZ9/Succj+XZiANoDqL9gXSEjNXVVWVbL/7aGJJF9LLQ3VVxmHdbYs1NcTI6Yy9d61zDJHnK/nlYHMhmAHxiDsZEpv0xF72LLzaI86xxvnbx4eUpnyW6LnKiUCYUAUrWIMpLiIbWUxeIoYmj9rqLhwlo5kCy7WdCYYEMTtGI53oIyU0EbXf/r4WAuzmqpVRPyc2Sd5tYD4aXh1JZLUcZy+NLR0Y4SA8RflKFcs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFDJYF6pUvFgGUbY2QEOHAq7ZEhRQJUqPTVPOuTyb476#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPJ19afQPeSMtr3O9L1fe5+bNzTAsOOCA5fLihUdryDYc29KKD+0XABHKIvqeefcCsIBjZRA//9OzCUftfvXK9A=#012 create=True mode=0644 path=/tmp/ansible.hbyixhmr state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:06.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:48:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:07 np0005548915 python3.9[131014]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hbyixhmr' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:48:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:07.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:08 np0005548915 python3.9[131168]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hbyixhmr state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:08 np0005548915 systemd-logind[795]: Session 45 logged out. Waiting for processes to exit.
Dec  6 04:48:08 np0005548915 systemd[1]: session-45.scope: Deactivated successfully.
Dec  6 04:48:08 np0005548915 systemd[1]: session-45.scope: Consumed 5.630s CPU time.
Dec  6 04:48:08 np0005548915 systemd-logind[795]: Removed session 45.
Dec  6 04:48:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:48:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:48:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:09.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 04:48:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 04:48:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:11.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:13.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:48:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:16.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:48:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:17.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:18 np0005548915 systemd-logind[795]: New session 46 of user zuul.
Dec  6 04:48:18 np0005548915 systemd[1]: Started Session 46 of User zuul.
Dec  6 04:48:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:19 np0005548915 python3.9[131356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:48:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:19.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:19.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:20 np0005548915 python3.9[131514]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  6 04:48:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 04:48:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 04:48:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:21.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:21 np0005548915 python3.9[131670]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:48:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:22 np0005548915 python3.9[131823]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:48:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:23.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:23.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:23 np0005548915 python3.9[131978]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:48:23
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.nfs', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data']
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:48:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:48:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:48:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:48:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:48:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:48:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:24 np0005548915 python3.9[132132]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:24 np0005548915 systemd-logind[795]: Session 46 logged out. Waiting for processes to exit.
Dec  6 04:48:24 np0005548915 systemd[1]: session-46.scope: Deactivated successfully.
Dec  6 04:48:24 np0005548915 systemd[1]: session-46.scope: Consumed 4.387s CPU time.
Dec  6 04:48:24 np0005548915 systemd-logind[795]: Removed session 46.
Dec  6 04:48:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:25.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec  6 04:48:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:26.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:48:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:26.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:48:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:26.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:48:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:27.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:29.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:30 np0005548915 systemd-logind[795]: New session 47 of user zuul.
Dec  6 04:48:30 np0005548915 systemd[1]: Started Session 47 of User zuul.
Dec  6 04:48:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:48:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:48:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:31 np0005548915 python3.9[132341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:48:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:31.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:32 np0005548915 python3.9[132499]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:48:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:33 np0005548915 python3.9[132584]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 04:48:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:33.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:48:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:35 np0005548915 python3.9[132738]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:48:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:35.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:36 np0005548915 python3.9[132889]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 04:48:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:36.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:48:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:37.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:37 np0005548915 python3.9[133041]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:48:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:38 np0005548915 python3.9[133191]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:48:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:38 np0005548915 systemd[1]: session-47.scope: Deactivated successfully.
Dec  6 04:48:38 np0005548915 systemd[1]: session-47.scope: Consumed 6.060s CPU time.
Dec  6 04:48:38 np0005548915 systemd-logind[795]: Session 47 logged out. Waiting for processes to exit.
Dec  6 04:48:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:48:38 np0005548915 systemd-logind[795]: Removed session 47.
Dec  6 04:48:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:48:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:39.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:39.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:48:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:48:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:41.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:43.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:43.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:48:44 np0005548915 systemd-logind[795]: New session 48 of user zuul.
Dec  6 04:48:44 np0005548915 systemd[1]: Started Session 48 of User zuul.
Dec  6 04:48:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:45 np0005548915 python3.9[133375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:48:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:45.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:46 np0005548915 python3.9[133560]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:48:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:46.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:48:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:46.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:48:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:46.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:48:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:47 np0005548915 python3.9[133714]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:48:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:47.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:48 np0005548915 python3.9[133866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:49 np0005548915 python3.9[133989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014527.7212272-155-115046506001513/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4972a5b4767763bd2b83e4da30fd5d4465a5d407 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:49 np0005548915 python3.9[134143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:48:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:48:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:50 np0005548915 python3.9[134266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014529.2117805-155-122908161432421/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f805cc6455e59702aa77bd6ffe81bb9b155b0be7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:50 np0005548915 python3.9[134418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:48:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:48:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:51 np0005548915 python3.9[134542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014530.3228693-155-204436662716346/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=fc784aa4b08f164441f6f4f35eca9daa081a5501 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:51.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:51.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:52 np0005548915 python3.9[134695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:48:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:48:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:52 np0005548915 python3.9[134942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:48:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:48:53 np0005548915 python3.9[135174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:53.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:53 np0005548915 podman[135312]: 2025-12-06 09:48:53.663363893 +0000 UTC m=+0.056433235 container create 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 04:48:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:53.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:53 np0005548915 systemd[1]: Started libpod-conmon-7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9.scope.
Dec  6 04:48:53 np0005548915 podman[135312]: 2025-12-06 09:48:53.636760564 +0000 UTC m=+0.029829986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:48:53 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:48:53 np0005548915 podman[135312]: 2025-12-06 09:48:53.764420043 +0000 UTC m=+0.157489405 container init 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:48:53 np0005548915 podman[135312]: 2025-12-06 09:48:53.77098247 +0000 UTC m=+0.164051812 container start 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:48:53 np0005548915 podman[135312]: 2025-12-06 09:48:53.773812826 +0000 UTC m=+0.166882168 container attach 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:48:53 np0005548915 brave_yalow[135381]: 167 167
Dec  6 04:48:53 np0005548915 systemd[1]: libpod-7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9.scope: Deactivated successfully.
Dec  6 04:48:53 np0005548915 podman[135312]: 2025-12-06 09:48:53.777367153 +0000 UTC m=+0.170436495 container died 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 04:48:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay-afc5d14b32605f51a550e429c1d33fdaa9db3409f4b6cedb2293dfcc9fc80028-merged.mount: Deactivated successfully.
Dec  6 04:48:53 np0005548915 podman[135312]: 2025-12-06 09:48:53.815303977 +0000 UTC m=+0.208373319 container remove 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:48:53 np0005548915 systemd[1]: libpod-conmon-7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9.scope: Deactivated successfully.
Dec  6 04:48:53 np0005548915 python3.9[135383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014532.938963-331-270173198636715/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=37e9f8032863405665c1a6629c82ece5be598bf6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:48:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:48:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:48:53 np0005548915 podman[135407]: 2025-12-06 09:48:53.958586548 +0000 UTC m=+0.044332179 container create a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:48:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:48:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:48:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:48:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:48:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:48:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:48:53 np0005548915 systemd[1]: Started libpod-conmon-a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa.scope.
Dec  6 04:48:54 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:48:54 np0005548915 podman[135407]: 2025-12-06 09:48:53.939077751 +0000 UTC m=+0.024823432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:48:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:54 np0005548915 podman[135407]: 2025-12-06 09:48:54.047712355 +0000 UTC m=+0.133458006 container init a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 04:48:54 np0005548915 podman[135407]: 2025-12-06 09:48:54.059617897 +0000 UTC m=+0.145363528 container start a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:48:54 np0005548915 podman[135407]: 2025-12-06 09:48:54.070706226 +0000 UTC m=+0.156451857 container attach a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 04:48:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:54 np0005548915 blissful_jang[135447]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:48:54 np0005548915 blissful_jang[135447]: --> All data devices are unavailable
Dec  6 04:48:54 np0005548915 podman[135407]: 2025-12-06 09:48:54.411692297 +0000 UTC m=+0.497437938 container died a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:48:54 np0005548915 systemd[1]: libpod-a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa.scope: Deactivated successfully.
Dec  6 04:48:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e-merged.mount: Deactivated successfully.
Dec  6 04:48:54 np0005548915 podman[135407]: 2025-12-06 09:48:54.459442897 +0000 UTC m=+0.545188528 container remove a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:48:54 np0005548915 systemd[1]: libpod-conmon-a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa.scope: Deactivated successfully.
Dec  6 04:48:54 np0005548915 python3.9[135587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:55 np0005548915 podman[135818]: 2025-12-06 09:48:55.002985269 +0000 UTC m=+0.044745170 container create ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:48:55 np0005548915 systemd[1]: Started libpod-conmon-ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555.scope.
Dec  6 04:48:55 np0005548915 python3.9[135805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014534.0829535-331-11209439750338/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=72139a22070e52361b83b34c98df3f4b6e2a8fd5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:55 np0005548915 podman[135818]: 2025-12-06 09:48:54.982004323 +0000 UTC m=+0.023764254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:48:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:48:55 np0005548915 podman[135818]: 2025-12-06 09:48:55.105410026 +0000 UTC m=+0.147169957 container init ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 04:48:55 np0005548915 podman[135818]: 2025-12-06 09:48:55.111708236 +0000 UTC m=+0.153468177 container start ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:48:55 np0005548915 podman[135818]: 2025-12-06 09:48:55.115435626 +0000 UTC m=+0.157195557 container attach ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:48:55 np0005548915 sweet_ritchie[135835]: 167 167
Dec  6 04:48:55 np0005548915 systemd[1]: libpod-ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555.scope: Deactivated successfully.
Dec  6 04:48:55 np0005548915 podman[135818]: 2025-12-06 09:48:55.116816954 +0000 UTC m=+0.158576865 container died ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:48:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6e18f6b41bc202354ddee8a8299af53146a349f7275517390b1e2d227808f4a4-merged.mount: Deactivated successfully.
Dec  6 04:48:55 np0005548915 podman[135818]: 2025-12-06 09:48:55.155274282 +0000 UTC m=+0.197034193 container remove ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:48:55 np0005548915 systemd[1]: libpod-conmon-ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555.scope: Deactivated successfully.
Dec  6 04:48:55 np0005548915 podman[135917]: 2025-12-06 09:48:55.299356735 +0000 UTC m=+0.045109050 container create 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:48:55 np0005548915 systemd[1]: Started libpod-conmon-4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d.scope.
Dec  6 04:48:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:48:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:55 np0005548915 podman[135917]: 2025-12-06 09:48:55.282764936 +0000 UTC m=+0.028517281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:48:55 np0005548915 podman[135917]: 2025-12-06 09:48:55.374862274 +0000 UTC m=+0.120614609 container init 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:48:55 np0005548915 podman[135917]: 2025-12-06 09:48:55.38062291 +0000 UTC m=+0.126375225 container start 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:48:55 np0005548915 podman[135917]: 2025-12-06 09:48:55.384401171 +0000 UTC m=+0.130153516 container attach 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:48:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:55.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:55 np0005548915 python3.9[136034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:55.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:55 np0005548915 eager_beaver[135977]: {
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:    "1": [
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:        {
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "devices": [
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "/dev/loop3"
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            ],
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "lv_name": "ceph_lv0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "lv_size": "21470642176",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "name": "ceph_lv0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "tags": {
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.cluster_name": "ceph",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.crush_device_class": "",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.encrypted": "0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.osd_id": "1",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.type": "block",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.vdo": "0",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:                "ceph.with_tpm": "0"
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            },
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "type": "block",
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:            "vg_name": "ceph_vg0"
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:        }
Dec  6 04:48:55 np0005548915 eager_beaver[135977]:    ]
Dec  6 04:48:55 np0005548915 eager_beaver[135977]: }
Dec  6 04:48:55 np0005548915 systemd[1]: libpod-4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d.scope: Deactivated successfully.
Dec  6 04:48:55 np0005548915 podman[135917]: 2025-12-06 09:48:55.700030458 +0000 UTC m=+0.445782773 container died 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:48:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207-merged.mount: Deactivated successfully.
Dec  6 04:48:55 np0005548915 podman[135917]: 2025-12-06 09:48:55.744643833 +0000 UTC m=+0.490396148 container remove 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  6 04:48:55 np0005548915 systemd[1]: libpod-conmon-4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d.scope: Deactivated successfully.
Dec  6 04:48:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:56 np0005548915 python3.9[136223]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014535.226689-331-14089768234595/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=66483f330c598e63a8652032707c5bbf72ed3439 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:56 np0005548915 podman[136264]: 2025-12-06 09:48:56.256365806 +0000 UTC m=+0.040462085 container create 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  6 04:48:56 np0005548915 systemd[1]: Started libpod-conmon-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope.
Dec  6 04:48:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:56 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:48:56 np0005548915 podman[136264]: 2025-12-06 09:48:56.237731162 +0000 UTC m=+0.021827401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:48:56 np0005548915 podman[136264]: 2025-12-06 09:48:56.339183212 +0000 UTC m=+0.123279471 container init 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 04:48:56 np0005548915 podman[136264]: 2025-12-06 09:48:56.347368073 +0000 UTC m=+0.131464352 container start 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:48:56 np0005548915 podman[136264]: 2025-12-06 09:48:56.352069881 +0000 UTC m=+0.136166120 container attach 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 04:48:56 np0005548915 ecstatic_moser[136305]: 167 167
Dec  6 04:48:56 np0005548915 systemd[1]: libpod-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope: Deactivated successfully.
Dec  6 04:48:56 np0005548915 conmon[136305]: conmon 1d52d602eaa4a6408d4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope/container/memory.events
Dec  6 04:48:56 np0005548915 podman[136264]: 2025-12-06 09:48:56.354275939 +0000 UTC m=+0.138372168 container died 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 04:48:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-23615edcbcd145c5cace3d57e3100661321ce32f634ac25b1f6e59db6cfee4c8-merged.mount: Deactivated successfully.
Dec  6 04:48:56 np0005548915 podman[136264]: 2025-12-06 09:48:56.394506537 +0000 UTC m=+0.178602776 container remove 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:48:56 np0005548915 systemd[1]: libpod-conmon-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope: Deactivated successfully.
Dec  6 04:48:56 np0005548915 podman[136386]: 2025-12-06 09:48:56.544085757 +0000 UTC m=+0.047484314 container create a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:48:56 np0005548915 systemd[1]: Started libpod-conmon-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope.
Dec  6 04:48:56 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:48:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:48:56 np0005548915 podman[136386]: 2025-12-06 09:48:56.526098241 +0000 UTC m=+0.029496818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:48:56 np0005548915 podman[136386]: 2025-12-06 09:48:56.621664003 +0000 UTC m=+0.125062590 container init a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:48:56 np0005548915 podman[136386]: 2025-12-06 09:48:56.63120336 +0000 UTC m=+0.134601917 container start a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 04:48:56 np0005548915 podman[136386]: 2025-12-06 09:48:56.634632163 +0000 UTC m=+0.138030740 container attach a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:48:56 np0005548915 python3.9[136477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:48:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:56.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:48:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:56.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:48:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:57 np0005548915 lvm[136699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:48:57 np0005548915 lvm[136699]: VG ceph_vg0 finished
Dec  6 04:48:57 np0005548915 festive_buck[136433]: {}
Dec  6 04:48:57 np0005548915 systemd[1]: libpod-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope: Deactivated successfully.
Dec  6 04:48:57 np0005548915 systemd[1]: libpod-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope: Consumed 1.139s CPU time.
Dec  6 04:48:57 np0005548915 podman[136386]: 2025-12-06 09:48:57.348383042 +0000 UTC m=+0.851781589 container died a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:48:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb-merged.mount: Deactivated successfully.
Dec  6 04:48:57 np0005548915 podman[136386]: 2025-12-06 09:48:57.407960022 +0000 UTC m=+0.911358579 container remove a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:48:57 np0005548915 systemd[1]: libpod-conmon-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope: Deactivated successfully.
Dec  6 04:48:57 np0005548915 python3.9[136702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:48:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:48:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:48:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:57.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:48:58 np0005548915 python3.9[136892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:58 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:58 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:48:58 np0005548915 python3.9[137015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014537.6452935-503-218300622449219/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f56768d8301ea8c395a30ac1f665faa430ee5af5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:59 np0005548915 python3.9[137167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:48:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:48:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:48:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:48:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:48:59 np0005548915 python3.9[137293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014538.718619-503-27978408503817/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=72139a22070e52361b83b34c98df3f4b6e2a8fd5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:48:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:48:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:48:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:48:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:00 np0005548915 python3.9[137445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:49:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:49:00 np0005548915 python3.9[137568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014539.7986743-503-18492507804128/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eabf096ef39cf63ff907ddd7ef692acd9da19772 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:01.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:02 np0005548915 python3.9[137722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:02 np0005548915 python3.9[137874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:03 np0005548915 python3.9[138001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014542.4199197-704-238500884477933/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:49:04 np0005548915 python3.9[138153]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:04 np0005548915 python3.9[138305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:05 np0005548915 python3.9[138429]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014544.3093698-777-77760674241667/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:06 np0005548915 python3.9[138582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:06 np0005548915 python3.9[138734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:49:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:49:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:06.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:49:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:07 np0005548915 python3.9[138883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014546.2245512-847-108239935421504/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:07.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:07 np0005548915 python3.9[139036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:08 np0005548915 python3.9[139188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:49:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:49:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:09 np0005548915 python3.9[139312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014548.1095388-916-21188724777257/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:09 np0005548915 python3.9[139465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:10 np0005548915 python3.9[139617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:49:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:49:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:11 np0005548915 python3.9[139741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014550.1217773-988-80986800049945/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:11.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:11.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:12 np0005548915 python3.9[139894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:12 np0005548915 python3.9[140046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:13 np0005548915 python3.9[140171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014552.2562168-1067-18270365404215/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:13.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:13.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:49:14 np0005548915 systemd[1]: session-48.scope: Deactivated successfully.
Dec  6 04:49:14 np0005548915 systemd[1]: session-48.scope: Consumed 22.935s CPU time.
Dec  6 04:49:14 np0005548915 systemd-logind[795]: Session 48 logged out. Waiting for processes to exit.
Dec  6 04:49:14 np0005548915 systemd-logind[795]: Removed session 48.
Dec  6 04:49:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:15.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:15.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.207291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556207749, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 988, "num_deletes": 251, "total_data_size": 1821410, "memory_usage": 1841248, "flush_reason": "Manual Compaction"}
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556230599, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1766352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12314, "largest_seqno": 13300, "table_properties": {"data_size": 1761486, "index_size": 2454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10182, "raw_average_key_size": 19, "raw_value_size": 1751812, "raw_average_value_size": 3299, "num_data_blocks": 109, "num_entries": 531, "num_filter_entries": 531, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014469, "oldest_key_time": 1765014469, "file_creation_time": 1765014556, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 23032 microseconds, and 7046 cpu microseconds.
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.230653) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1766352 bytes OK
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.230674) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.232953) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.232969) EVENT_LOG_v1 {"time_micros": 1765014556232964, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.232988) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1816859, prev total WAL file size 1816859, number of live WAL files 2.
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.233773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1724KB)], [29(13MB)]
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556233832, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15981979, "oldest_snapshot_seqno": -1}
Dec  6 04:49:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4291 keys, 14024825 bytes, temperature: kUnknown
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556385821, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 14024825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13992830, "index_size": 20173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 109807, "raw_average_key_size": 25, "raw_value_size": 13911154, "raw_average_value_size": 3241, "num_data_blocks": 852, "num_entries": 4291, "num_filter_entries": 4291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014556, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.386043) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 14024825 bytes
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.387582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.1 rd, 92.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.6 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(17.0) write-amplify(7.9) OK, records in: 4809, records dropped: 518 output_compression: NoCompression
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.387602) EVENT_LOG_v1 {"time_micros": 1765014556387591, "job": 12, "event": "compaction_finished", "compaction_time_micros": 152069, "compaction_time_cpu_micros": 27850, "output_level": 6, "num_output_files": 1, "total_output_size": 14024825, "num_input_records": 4809, "num_output_records": 4291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556388125, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556391073, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.233671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:49:16 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:49:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:16.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:49:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094919 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:49:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:19.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:20 np0005548915 systemd-logind[795]: New session 49 of user zuul.
Dec  6 04:49:20 np0005548915 systemd[1]: Started Session 49 of User zuul.
Dec  6 04:49:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:49:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:49:20 np0005548915 python3.9[140359]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:21.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:21 np0005548915 python3.9[140513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:22 np0005548915 python3.9[140636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014561.1432085-62-111105369052718/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=944de880f37676f80f6e04a4864888bf3f7decbf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:23 np0005548915 python3.9[140789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:23.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:23.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:49:23
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.nfs', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:49:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:49:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 0 op/s
Dec  6 04:49:23 np0005548915 python3.9[140913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014562.848916-62-177836066296549/.source.conf _original_basename=ceph.conf follow=False checksum=531c84d7e2c99e4f6cf7d56dd7b16abeaf31bfa1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:49:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:49:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:24 np0005548915 systemd[1]: session-49.scope: Deactivated successfully.
Dec  6 04:49:24 np0005548915 systemd[1]: session-49.scope: Consumed 3.144s CPU time.
Dec  6 04:49:24 np0005548915 systemd-logind[795]: Session 49 logged out. Waiting for processes to exit.
Dec  6 04:49:24 np0005548915 systemd-logind[795]: Removed session 49.
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:49:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:49:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:49:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:26.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:49:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:26.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:49:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:49:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec  6 04:49:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec  6 04:49:30 np0005548915 systemd-logind[795]: New session 50 of user zuul.
Dec  6 04:49:30 np0005548915 systemd[1]: Started Session 50 of User zuul.
Dec  6 04:49:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180041e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:49:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:49:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  6 04:49:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  6 04:49:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:31 np0005548915 python3.9[141122]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:49:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:31.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec  6 04:49:32 np0005548915 python3.9[141280]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:32 np0005548915 python3.9[141432]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:49:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:33.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:49:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:33.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:33 np0005548915 python3.9[141584]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:49:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 937 B/s wr, 3 op/s
Dec  6 04:49:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:34 np0005548915 python3.9[141736]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  6 04:49:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:35.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:35.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:49:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:36 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  6 04:49:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:36.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:49:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:49:37 np0005548915 python3.9[141894]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:49:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:37.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 04:49:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 21.58 MB, 0.04 MB/s#012Interval WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  6 04:49:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:49:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:49:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:49:37 np0005548915 python3.9[141980]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:49:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:49:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:49:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094939 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:49:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:39.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:49:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:40 np0005548915 python3.9[142135]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:49:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:49:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:49:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:41.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:41.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:41 np0005548915 python3[142292]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  6 04:49:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:49:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:42 np0005548915 python3.9[142444]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:43 np0005548915 python3.9[142597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:43.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:43 np0005548915 python3.9[142676]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:49:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:44 np0005548915 python3.9[142828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:44 np0005548915 python3.9[142906]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.oxc8qy1t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:45.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:45 np0005548915 python3.9[143060]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:45.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:49:46 np0005548915 python3.9[143138]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:49:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:49:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:46.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:49:47 np0005548915 python3.9[143315]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:49:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:47.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:47 np0005548915 python3[143472]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 04:49:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:49:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:48 np0005548915 python3.9[143624]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:49 np0005548915 python3.9[143750]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014588.2157028-431-20748854031508/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:49.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:49.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:50 np0005548915 python3.9[143903]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:50 np0005548915 python3.9[144028]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014589.6290662-476-171515448248764/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:49:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:49:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:51 np0005548915 python3.9[144182]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:51.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:52 np0005548915 python3.9[144307]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014591.0313091-521-1512425184116/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:52 np0005548915 python3.9[144459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:53 np0005548915 python3.9[144585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014592.301395-566-199565958401142/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:53.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:53.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:49:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:49:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:49:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:49:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:49:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:49:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:49:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:49:54 np0005548915 python3.9[144738]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:49:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:54 np0005548915 python3.9[144865]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014593.6583588-611-58609445119118/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:55.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:55 np0005548915 python3.9[145019]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:55.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:49:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:56 np0005548915 python3.9[145171]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:49:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:57.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:49:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:57 np0005548915 python3.9[145327]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:49:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:57.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:57.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:58 np0005548915 python3.9[145530]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:49:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:49:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:49:58 np0005548915 python3.9[145716]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:49:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:49:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:49:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:49:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:49:59 np0005548915 podman[145922]: 2025-12-06 09:49:59.463688147 +0000 UTC m=+0.046695685 container create 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:49:59 np0005548915 systemd[1]: Started libpod-conmon-53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87.scope.
Dec  6 04:49:59 np0005548915 podman[145922]: 2025-12-06 09:49:59.443405645 +0000 UTC m=+0.026413233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:49:59 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:49:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:49:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:49:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:49:59 np0005548915 podman[145922]: 2025-12-06 09:49:59.651215304 +0000 UTC m=+0.234222902 container init 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:49:59 np0005548915 podman[145922]: 2025-12-06 09:49:59.664702938 +0000 UTC m=+0.247710486 container start 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:49:59 np0005548915 youthful_torvalds[145977]: 167 167
Dec  6 04:49:59 np0005548915 systemd[1]: libpod-53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87.scope: Deactivated successfully.
Dec  6 04:49:59 np0005548915 podman[145922]: 2025-12-06 09:49:59.676885798 +0000 UTC m=+0.259893366 container attach 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:49:59 np0005548915 podman[145922]: 2025-12-06 09:49:59.678069208 +0000 UTC m=+0.261076746 container died 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 04:49:59 np0005548915 python3.9[145978]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:49:59 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6937766aa5949b2b3803fb16e76de0a3cb41aa137e4e303bd57838871dd8e1db-merged.mount: Deactivated successfully.
Dec  6 04:49:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:49:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:49:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:59.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:49:59 np0005548915 podman[145922]: 2025-12-06 09:49:59.809837093 +0000 UTC m=+0.392844641 container remove 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:49:59 np0005548915 systemd[1]: libpod-conmon-53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87.scope: Deactivated successfully.
Dec  6 04:49:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  6 04:50:00 np0005548915 podman[146056]: 2025-12-06 09:50:00.005294868 +0000 UTC m=+0.044827726 container create 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:50:00 np0005548915 systemd[1]: Started libpod-conmon-5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713.scope.
Dec  6 04:50:00 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:50:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:00 np0005548915 podman[146056]: 2025-12-06 09:49:59.985016157 +0000 UTC m=+0.024549035 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:50:00 np0005548915 podman[146056]: 2025-12-06 09:50:00.094789015 +0000 UTC m=+0.134321963 container init 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:50:00 np0005548915 podman[146056]: 2025-12-06 09:50:00.104195641 +0000 UTC m=+0.143728499 container start 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:50:00 np0005548915 podman[146056]: 2025-12-06 09:50:00.10797734 +0000 UTC m=+0.147510198 container attach 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:50:00 np0005548915 ceph-mon[74327]: overall HEALTH_OK
Dec  6 04:50:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:00 np0005548915 python3.9[146179]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:00 np0005548915 amazing_ishizaka[146120]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:50:00 np0005548915 amazing_ishizaka[146120]: --> All data devices are unavailable
Dec  6 04:50:00 np0005548915 systemd[1]: libpod-5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713.scope: Deactivated successfully.
Dec  6 04:50:00 np0005548915 podman[146056]: 2025-12-06 09:50:00.522441667 +0000 UTC m=+0.561974525 container died 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:50:00 np0005548915 systemd[1]: var-lib-containers-storage-overlay-dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454-merged.mount: Deactivated successfully.
Dec  6 04:50:00 np0005548915 podman[146056]: 2025-12-06 09:50:00.62205034 +0000 UTC m=+0.661583228 container remove 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:50:00 np0005548915 systemd[1]: libpod-conmon-5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713.scope: Deactivated successfully.
Dec  6 04:50:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  6 04:50:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  6 04:50:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:01 np0005548915 podman[146373]: 2025-12-06 09:50:01.207736876 +0000 UTC m=+0.023436015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:50:01 np0005548915 podman[146373]: 2025-12-06 09:50:01.458380538 +0000 UTC m=+0.274079717 container create 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:50:01 np0005548915 systemd[1]: Started libpod-conmon-64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec.scope.
Dec  6 04:50:01 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:50:01 np0005548915 podman[146373]: 2025-12-06 09:50:01.55720842 +0000 UTC m=+0.372907559 container init 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:50:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:01 np0005548915 podman[146373]: 2025-12-06 09:50:01.564791068 +0000 UTC m=+0.380490187 container start 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:50:01 np0005548915 podman[146373]: 2025-12-06 09:50:01.568304031 +0000 UTC m=+0.384003160 container attach 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:50:01 np0005548915 gallant_chatelet[146464]: 167 167
Dec  6 04:50:01 np0005548915 systemd[1]: libpod-64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec.scope: Deactivated successfully.
Dec  6 04:50:01 np0005548915 podman[146373]: 2025-12-06 09:50:01.57095022 +0000 UTC m=+0.386649349 container died 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:50:01 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6990fa6707aa1de6810e53c68f35a85734b6733509beec674fc5969e21738847-merged.mount: Deactivated successfully.
Dec  6 04:50:01 np0005548915 podman[146373]: 2025-12-06 09:50:01.615089987 +0000 UTC m=+0.430789126 container remove 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:50:01 np0005548915 systemd[1]: libpod-conmon-64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec.scope: Deactivated successfully.
Dec  6 04:50:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:01.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:01 np0005548915 python3.9[146461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:50:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:01.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:01 np0005548915 podman[146490]: 2025-12-06 09:50:01.815614175 +0000 UTC m=+0.089160559 container create fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 04:50:01 np0005548915 podman[146490]: 2025-12-06 09:50:01.751831622 +0000 UTC m=+0.025378016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:50:01 np0005548915 systemd[1]: Started libpod-conmon-fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9.scope.
Dec  6 04:50:01 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:50:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:01 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:01 np0005548915 podman[146490]: 2025-12-06 09:50:01.920517715 +0000 UTC m=+0.194064139 container init fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:50:01 np0005548915 podman[146490]: 2025-12-06 09:50:01.929877451 +0000 UTC m=+0.203423865 container start fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:50:01 np0005548915 podman[146490]: 2025-12-06 09:50:01.941612089 +0000 UTC m=+0.215158493 container attach fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:50:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:02 np0005548915 loving_jennings[146530]: {
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:    "1": [
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:        {
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "devices": [
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "/dev/loop3"
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            ],
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "lv_name": "ceph_lv0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "lv_size": "21470642176",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "name": "ceph_lv0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "tags": {
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.cluster_name": "ceph",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.crush_device_class": "",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.encrypted": "0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.osd_id": "1",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.type": "block",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.vdo": "0",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:                "ceph.with_tpm": "0"
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            },
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "type": "block",
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:            "vg_name": "ceph_vg0"
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:        }
Dec  6 04:50:02 np0005548915 loving_jennings[146530]:    ]
Dec  6 04:50:02 np0005548915 loving_jennings[146530]: }
Dec  6 04:50:02 np0005548915 systemd[1]: libpod-fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9.scope: Deactivated successfully.
Dec  6 04:50:02 np0005548915 podman[146490]: 2025-12-06 09:50:02.247514099 +0000 UTC m=+0.521060483 container died fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:50:02 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2-merged.mount: Deactivated successfully.
Dec  6 04:50:02 np0005548915 podman[146490]: 2025-12-06 09:50:02.293172096 +0000 UTC m=+0.566718460 container remove fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:50:02 np0005548915 systemd[1]: libpod-conmon-fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9.scope: Deactivated successfully.
Dec  6 04:50:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:02 np0005548915 podman[146769]: 2025-12-06 09:50:02.901569049 +0000 UTC m=+0.049540040 container create 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 04:50:02 np0005548915 systemd[1]: Started libpod-conmon-136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854.scope.
Dec  6 04:50:02 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:50:02 np0005548915 python3.9[146762]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:50:02 np0005548915 podman[146769]: 2025-12-06 09:50:02.882644093 +0000 UTC m=+0.030615084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:50:02 np0005548915 podman[146769]: 2025-12-06 09:50:02.986439884 +0000 UTC m=+0.134410875 container init 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 04:50:02 np0005548915 ovs-vsctl[146788]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  6 04:50:02 np0005548915 podman[146769]: 2025-12-06 09:50:02.995248255 +0000 UTC m=+0.143219226 container start 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:50:03 np0005548915 funny_hugle[146785]: 167 167
Dec  6 04:50:03 np0005548915 systemd[1]: libpod-136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854.scope: Deactivated successfully.
Dec  6 04:50:03 np0005548915 podman[146769]: 2025-12-06 09:50:03.00421039 +0000 UTC m=+0.152181381 container attach 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:50:03 np0005548915 podman[146769]: 2025-12-06 09:50:03.00460679 +0000 UTC m=+0.152577761 container died 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 04:50:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c35d22a53b864f392bc466fd8827324cca5e67b348e334072e578f0332ecc931-merged.mount: Deactivated successfully.
Dec  6 04:50:03 np0005548915 podman[146769]: 2025-12-06 09:50:03.047042784 +0000 UTC m=+0.195013755 container remove 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  6 04:50:03 np0005548915 systemd[1]: libpod-conmon-136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854.scope: Deactivated successfully.
Dec  6 04:50:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:03 np0005548915 podman[146834]: 2025-12-06 09:50:03.202087529 +0000 UTC m=+0.050009073 container create 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:50:03 np0005548915 systemd[1]: Started libpod-conmon-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope.
Dec  6 04:50:03 np0005548915 podman[146834]: 2025-12-06 09:50:03.179009164 +0000 UTC m=+0.026930698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:50:03 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:50:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:03 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:03 np0005548915 podman[146834]: 2025-12-06 09:50:03.307115512 +0000 UTC m=+0.155037076 container init 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 04:50:03 np0005548915 podman[146834]: 2025-12-06 09:50:03.314526397 +0000 UTC m=+0.162447951 container start 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:50:03 np0005548915 podman[146834]: 2025-12-06 09:50:03.320941245 +0000 UTC m=+0.168862809 container attach 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:50:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:03.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:03 np0005548915 python3.9[147023]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:50:03 np0005548915 lvm[147057]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:50:03 np0005548915 lvm[147057]: VG ceph_vg0 finished
Dec  6 04:50:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:03 np0005548915 lvm[147083]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:50:03 np0005548915 lvm[147083]: VG ceph_vg0 finished
Dec  6 04:50:04 np0005548915 agitated_chebyshev[146852]: {}
Dec  6 04:50:04 np0005548915 systemd[1]: libpod-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope: Deactivated successfully.
Dec  6 04:50:04 np0005548915 podman[146834]: 2025-12-06 09:50:04.034358251 +0000 UTC m=+0.882279785 container died 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:50:04 np0005548915 systemd[1]: libpod-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope: Consumed 1.171s CPU time.
Dec  6 04:50:04 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750-merged.mount: Deactivated successfully.
Dec  6 04:50:04 np0005548915 podman[146834]: 2025-12-06 09:50:04.074551014 +0000 UTC m=+0.922472548 container remove 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 04:50:04 np0005548915 systemd[1]: libpod-conmon-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope: Deactivated successfully.
Dec  6 04:50:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:50:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:50:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:50:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:50:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:04 np0005548915 python3.9[147248]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:50:04 np0005548915 ovs-vsctl[147249]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  6 04:50:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:50:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:50:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:05 np0005548915 python3.9[147400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:50:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:05.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:06 np0005548915 python3.9[147555]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:06 np0005548915 python3.9[147707]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:07.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:50:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:07.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:50:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:07 np0005548915 python3.9[147811]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:07.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:07.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:07 np0005548915 python3.9[147964]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:50:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:08 np0005548915 python3.9[148042]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:50:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:50:09 np0005548915 python3.9[148194]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:09.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:09 np0005548915 python3.9[148348]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:10 np0005548915 python3.9[148426]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:50:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:50:10 np0005548915 python3.9[148578]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:11 np0005548915 python3.9[148658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 04:50:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 04:50:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:12 np0005548915 python3.9[148810]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:50:12 np0005548915 systemd[1]: Reloading.
Dec  6 04:50:12 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:50:12 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:50:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:13 np0005548915 python3.9[149001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:13.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:13.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:14 np0005548915 python3.9[149079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:14 np0005548915 python3.9[149231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:15 np0005548915 python3.9[149310]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:15.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:16 np0005548915 python3.9[149463]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:50:16 np0005548915 systemd[1]: Reloading.
Dec  6 04:50:16 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:50:16 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:50:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:16 np0005548915 systemd[1]: Starting Create netns directory...
Dec  6 04:50:16 np0005548915 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 04:50:16 np0005548915 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 04:50:16 np0005548915 systemd[1]: Finished Create netns directory.
Dec  6 04:50:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:17.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:50:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:17 np0005548915 python3.9[149658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:17.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:50:18 np0005548915 python3.9[149811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:18 np0005548915 python3.9[149934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014617.5529356-1364-276889325439279/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095019 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:50:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:19 np0005548915 python3.9[150091]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:19.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:50:20 np0005548915 python3.9[150243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300037e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:20 np0005548915 python3.9[150366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014619.7056198-1439-207679309671299/.source.json _original_basename=.mdcixkao follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:50:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:50:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:21 np0005548915 python3.9[150519]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:21.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:50:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 04:50:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 04:50:23 np0005548915 python3.9[150949]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  6 04:50:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:23.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:50:23
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes', '.nfs', 'vms', 'default.rgw.control']
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:50:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:50:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:50:23 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:50:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:50:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:50:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:24 np0005548915 python3.9[151101]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 04:50:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:25.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:25 np0005548915 python3.9[151255]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  6 04:50:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:50:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:50:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:27.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:50:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:27.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:27 np0005548915 python3[151461]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 04:50:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:27.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Dec  6 04:50:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:29.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec  6 04:50:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:50:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:50:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:50:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:50:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  6 04:50:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 04:50:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 04:50:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:31.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec  6 04:50:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:32 np0005548915 podman[151474]: 2025-12-06 09:50:32.838973163 +0000 UTC m=+5.095633200 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec  6 04:50:32 np0005548915 podman[151596]: 2025-12-06 09:50:32.964792592 +0000 UTC m=+0.046010517 container create ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 04:50:32 np0005548915 podman[151596]: 2025-12-06 09:50:32.939115609 +0000 UTC m=+0.020333544 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec  6 04:50:32 np0005548915 python3[151461]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec  6 04:50:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:50:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:33.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:33 np0005548915 python3.9[151788]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:50:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 937 B/s wr, 3 op/s
Dec  6 04:50:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:34 np0005548915 python3.9[151942]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:35 np0005548915 python3.9[152019]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:50:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:35.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:35.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:50:36 np0005548915 python3.9[152171]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014635.453923-1703-231319631757280/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:50:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:36 np0005548915 python3.9[152247]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:50:36 np0005548915 systemd[1]: Reloading.
Dec  6 04:50:36 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:50:36 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:50:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:37.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:50:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:37.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:50:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:37.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:37 np0005548915 python3.9[152360]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:50:37 np0005548915 systemd[1]: Reloading.
Dec  6 04:50:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:37.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:37 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:50:37 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:50:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  6 04:50:38 np0005548915 systemd[1]: Starting ovn_controller container...
Dec  6 04:50:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:50:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613f756a1f73dc4a39e91ac477d7099b198677971a4e550307612100884cde52/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  6 04:50:38 np0005548915 systemd[1]: Started /usr/bin/podman healthcheck run ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab.
Dec  6 04:50:38 np0005548915 podman[152401]: 2025-12-06 09:50:38.180955131 +0000 UTC m=+0.132340461 container init ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + sudo -E kolla_set_configs
Dec  6 04:50:38 np0005548915 podman[152401]: 2025-12-06 09:50:38.211376318 +0000 UTC m=+0.162761658 container start ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  6 04:50:38 np0005548915 edpm-start-podman-container[152401]: ovn_controller
Dec  6 04:50:38 np0005548915 systemd[1]: Created slice User Slice of UID 0.
Dec  6 04:50:38 np0005548915 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  6 04:50:38 np0005548915 edpm-start-podman-container[152400]: Creating additional drop-in dependency for "ovn_controller" (ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab)
Dec  6 04:50:38 np0005548915 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  6 04:50:38 np0005548915 podman[152424]: 2025-12-06 09:50:38.288571653 +0000 UTC m=+0.066761532 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  6 04:50:38 np0005548915 systemd[1]: Starting User Manager for UID 0...
Dec  6 04:50:38 np0005548915 systemd[1]: ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab-2e149f55b40c15c1.service: Main process exited, code=exited, status=1/FAILURE
Dec  6 04:50:38 np0005548915 systemd[1]: ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab-2e149f55b40c15c1.service: Failed with result 'exit-code'.
Dec  6 04:50:38 np0005548915 systemd[1]: Reloading.
Dec  6 04:50:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:38 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:50:38 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:50:38 np0005548915 systemd[1]: Started ovn_controller container.
Dec  6 04:50:38 np0005548915 systemd[152463]: Queued start job for default target Main User Target.
Dec  6 04:50:38 np0005548915 systemd[152463]: Created slice User Application Slice.
Dec  6 04:50:38 np0005548915 systemd[152463]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  6 04:50:38 np0005548915 systemd[152463]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 04:50:38 np0005548915 systemd[152463]: Reached target Paths.
Dec  6 04:50:38 np0005548915 systemd[152463]: Reached target Timers.
Dec  6 04:50:38 np0005548915 systemd[152463]: Starting D-Bus User Message Bus Socket...
Dec  6 04:50:38 np0005548915 systemd[152463]: Starting Create User's Volatile Files and Directories...
Dec  6 04:50:38 np0005548915 systemd[152463]: Listening on D-Bus User Message Bus Socket.
Dec  6 04:50:38 np0005548915 systemd[152463]: Reached target Sockets.
Dec  6 04:50:38 np0005548915 systemd[152463]: Finished Create User's Volatile Files and Directories.
Dec  6 04:50:38 np0005548915 systemd[152463]: Reached target Basic System.
Dec  6 04:50:38 np0005548915 systemd[152463]: Reached target Main User Target.
Dec  6 04:50:38 np0005548915 systemd[152463]: Startup finished in 151ms.
Dec  6 04:50:38 np0005548915 systemd[1]: Started User Manager for UID 0.
Dec  6 04:50:38 np0005548915 systemd[1]: Started Session c1 of User root.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: INFO:__main__:Validating config file
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: INFO:__main__:Writing out command to execute
Dec  6 04:50:38 np0005548915 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: ++ cat /run_command
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + ARGS=
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + sudo kolla_copy_cacerts
Dec  6 04:50:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:50:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:50:38 np0005548915 systemd[1]: Started Session c2 of User root.
Dec  6 04:50:38 np0005548915 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + [[ ! -n '' ]]
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + . kolla_extend_start
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + umask 0022
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9654] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9665] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9679] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9687] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9691] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  6 04:50:38 np0005548915 kernel: br-int: entered promiscuous mode
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00017|main|INFO|OVS feature set changed, force recompute.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 04:50:38 np0005548915 ovn_controller[152417]: 2025-12-06T09:50:38Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9838] manager: (ovn-127282-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9846] manager: (ovn-1b31b2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9918] manager: (ovn-61eba4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec  6 04:50:38 np0005548915 kernel: genev_sys_6081: entered promiscuous mode
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9995] device (genev_sys_6081): carrier: link connected
Dec  6 04:50:38 np0005548915 NetworkManager[48882]: <info>  [1765014638.9998] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Dec  6 04:50:39 np0005548915 systemd-udevd[152565]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 04:50:39 np0005548915 systemd-udevd[152566]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 04:50:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095039 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:50:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:39 np0005548915 python3.9[152686]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:50:39 np0005548915 ovs-vsctl[152687]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  6 04:50:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:39.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:39.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec  6 04:50:40 np0005548915 python3.9[152839]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:50:40 np0005548915 ovs-vsctl[152841]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  6 04:50:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:50:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:50:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:41 np0005548915 python3.9[152995]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:50:41 np0005548915 ovs-vsctl[152997]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  6 04:50:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:41.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:41.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:41 np0005548915 systemd[1]: session-50.scope: Deactivated successfully.
Dec  6 04:50:41 np0005548915 systemd[1]: session-50.scope: Consumed 58.239s CPU time.
Dec  6 04:50:41 np0005548915 systemd-logind[795]: Session 50 logged out. Waiting for processes to exit.
Dec  6 04:50:41 np0005548915 systemd-logind[795]: Removed session 50.
Dec  6 04:50:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec  6 04:50:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:50:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:43.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:50:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:43.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec  6 04:50:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:45.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:45.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:50:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:47.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:50:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:47.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:50:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:47.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:50:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:47 np0005548915 systemd-logind[795]: New session 52 of user zuul.
Dec  6 04:50:47 np0005548915 systemd[1]: Started Session 52 of User zuul.
Dec  6 04:50:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:50:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:47.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:50:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:47.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:50:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:48 np0005548915 python3.9[153206]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:50:49 np0005548915 systemd[1]: Stopping User Manager for UID 0...
Dec  6 04:50:49 np0005548915 systemd[152463]: Activating special unit Exit the Session...
Dec  6 04:50:49 np0005548915 systemd[152463]: Stopped target Main User Target.
Dec  6 04:50:49 np0005548915 systemd[152463]: Stopped target Basic System.
Dec  6 04:50:49 np0005548915 systemd[152463]: Stopped target Paths.
Dec  6 04:50:49 np0005548915 systemd[152463]: Stopped target Sockets.
Dec  6 04:50:49 np0005548915 systemd[152463]: Stopped target Timers.
Dec  6 04:50:49 np0005548915 systemd[152463]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 04:50:49 np0005548915 systemd[152463]: Closed D-Bus User Message Bus Socket.
Dec  6 04:50:49 np0005548915 systemd[152463]: Stopped Create User's Volatile Files and Directories.
Dec  6 04:50:49 np0005548915 systemd[152463]: Removed slice User Application Slice.
Dec  6 04:50:49 np0005548915 systemd[152463]: Reached target Shutdown.
Dec  6 04:50:49 np0005548915 systemd[152463]: Finished Exit the Session.
Dec  6 04:50:49 np0005548915 systemd[152463]: Reached target Exit the Session.
Dec  6 04:50:49 np0005548915 systemd[1]: user@0.service: Deactivated successfully.
Dec  6 04:50:49 np0005548915 systemd[1]: Stopped User Manager for UID 0.
Dec  6 04:50:49 np0005548915 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  6 04:50:49 np0005548915 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  6 04:50:49 np0005548915 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  6 04:50:49 np0005548915 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  6 04:50:49 np0005548915 systemd[1]: Removed slice User Slice of UID 0.
Dec  6 04:50:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:50:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:50:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:50:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:49.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:50:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:50 np0005548915 python3.9[153366]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:50 np0005548915 python3.9[153521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:50:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:50:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:51 np0005548915 python3.9[153675]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:51.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:52 np0005548915 python3.9[153827]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:52 np0005548915 python3.9[153979]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:50:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:50:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:53.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:50:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:50:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:50:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:50:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:50:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:50:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:50:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:50:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:54 np0005548915 python3.9[154132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:50:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:55 np0005548915 python3.9[154285]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  6 04:50:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:55.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:50:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:50:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:57.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:50:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:57 np0005548915 python3.9[154440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:50:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:50:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:50:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:50:57 np0005548915 python3.9[154562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014656.5362813-218-108000303438830/.source follow=False _original_basename=haproxy.j2 checksum=cc5e97ea900947bff0c19d73b88d99840e041f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:50:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:58 np0005548915 python3.9[154712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:50:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:59 np0005548915 python3.9[154834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014658.2848246-263-4967866254114/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:50:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:50:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:50:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:50:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:50:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:50:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:50:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:59.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:00 np0005548915 python3.9[154987]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:51:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:01.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 04:51:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:01.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 04:51:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:02 np0005548915 python3.9[155072]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:51:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:03.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:03.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:51:05 np0005548915 python3.9[155295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:51:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:05 np0005548915 podman[155503]: 2025-12-06 09:51:05.719432937 +0000 UTC m=+0.023454384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:51:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:05.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:05 np0005548915 podman[155503]: 2025-12-06 09:51:05.898221333 +0000 UTC m=+0.202242750 container create 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:51:05 np0005548915 systemd[1]: Started libpod-conmon-8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36.scope.
Dec  6 04:51:05 np0005548915 python3.9[155567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:06 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:51:06 np0005548915 podman[155503]: 2025-12-06 09:51:06.029702202 +0000 UTC m=+0.333723639 container init 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 04:51:06 np0005548915 podman[155503]: 2025-12-06 09:51:06.040562825 +0000 UTC m=+0.344584252 container start 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:51:06 np0005548915 podman[155503]: 2025-12-06 09:51:06.044687447 +0000 UTC m=+0.348708924 container attach 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:51:06 np0005548915 kind_kalam[155570]: 167 167
Dec  6 04:51:06 np0005548915 systemd[1]: libpod-8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36.scope: Deactivated successfully.
Dec  6 04:51:06 np0005548915 podman[155503]: 2025-12-06 09:51:06.049611129 +0000 UTC m=+0.353633646 container died 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:51:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a3c7b13b9ca1b9c128a320206c8f9b9e605251561786b03998010b8ec4b1e900-merged.mount: Deactivated successfully.
Dec  6 04:51:06 np0005548915 podman[155503]: 2025-12-06 09:51:06.101392697 +0000 UTC m=+0.405414124 container remove 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 04:51:06 np0005548915 systemd[1]: libpod-conmon-8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36.scope: Deactivated successfully.
Dec  6 04:51:06 np0005548915 podman[155660]: 2025-12-06 09:51:06.308774955 +0000 UTC m=+0.063381022 container create 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:51:06 np0005548915 systemd[1]: Started libpod-conmon-3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502.scope.
Dec  6 04:51:06 np0005548915 podman[155660]: 2025-12-06 09:51:06.28005199 +0000 UTC m=+0.034658107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:51:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:06 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:51:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:06 np0005548915 podman[155660]: 2025-12-06 09:51:06.412128835 +0000 UTC m=+0.166734922 container init 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:51:06 np0005548915 podman[155660]: 2025-12-06 09:51:06.420753947 +0000 UTC m=+0.175360034 container start 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:51:06 np0005548915 podman[155660]: 2025-12-06 09:51:06.426434511 +0000 UTC m=+0.181040578 container attach 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:51:06 np0005548915 python3.9[155735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014665.5305347-374-258996016499359/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:06 np0005548915 jovial_lamport[155726]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:51:06 np0005548915 jovial_lamport[155726]: --> All data devices are unavailable
Dec  6 04:51:06 np0005548915 systemd[1]: libpod-3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502.scope: Deactivated successfully.
Dec  6 04:51:06 np0005548915 podman[155660]: 2025-12-06 09:51:06.800126567 +0000 UTC m=+0.554732644 container died 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:51:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1-merged.mount: Deactivated successfully.
Dec  6 04:51:06 np0005548915 podman[155660]: 2025-12-06 09:51:06.853748774 +0000 UTC m=+0.608354841 container remove 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:51:06 np0005548915 systemd[1]: libpod-conmon-3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502.scope: Deactivated successfully.
Dec  6 04:51:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:07.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:51:07 np0005548915 python3.9[155955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:07 np0005548915 podman[156091]: 2025-12-06 09:51:07.480111601 +0000 UTC m=+0.062712043 container create 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:51:07 np0005548915 systemd[1]: Started libpod-conmon-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope.
Dec  6 04:51:07 np0005548915 podman[156091]: 2025-12-06 09:51:07.447154042 +0000 UTC m=+0.029754524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:51:07 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:51:07 np0005548915 podman[156091]: 2025-12-06 09:51:07.571414956 +0000 UTC m=+0.154015408 container init 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 04:51:07 np0005548915 podman[156091]: 2025-12-06 09:51:07.581431056 +0000 UTC m=+0.164031508 container start 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:51:07 np0005548915 podman[156091]: 2025-12-06 09:51:07.586943195 +0000 UTC m=+0.169543657 container attach 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:51:07 np0005548915 keen_pascal[156134]: 167 167
Dec  6 04:51:07 np0005548915 systemd[1]: libpod-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope: Deactivated successfully.
Dec  6 04:51:07 np0005548915 conmon[156134]: conmon 26381ce42a8fcc9ab3bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope/container/memory.events
Dec  6 04:51:07 np0005548915 podman[156091]: 2025-12-06 09:51:07.591140089 +0000 UTC m=+0.173740551 container died 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:51:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay-566a58bf5e9430b627fabe786dce181fa97acdb7e394d43b025ea49de953a262-merged.mount: Deactivated successfully.
Dec  6 04:51:07 np0005548915 podman[156091]: 2025-12-06 09:51:07.639498284 +0000 UTC m=+0.222098706 container remove 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:51:07 np0005548915 systemd[1]: libpod-conmon-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope: Deactivated successfully.
Dec  6 04:51:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:07.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:07.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:07 np0005548915 podman[156184]: 2025-12-06 09:51:07.840278493 +0000 UTC m=+0.050857224 container create 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:51:07 np0005548915 python3.9[156172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014666.726728-374-19971812704897/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:07 np0005548915 systemd[1]: Started libpod-conmon-2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0.scope.
Dec  6 04:51:07 np0005548915 podman[156184]: 2025-12-06 09:51:07.813407278 +0000 UTC m=+0.023986029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:51:07 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:51:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:07 np0005548915 podman[156184]: 2025-12-06 09:51:07.933975112 +0000 UTC m=+0.144553863 container init 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 04:51:07 np0005548915 podman[156184]: 2025-12-06 09:51:07.942250605 +0000 UTC m=+0.152829366 container start 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:51:07 np0005548915 podman[156184]: 2025-12-06 09:51:07.946313505 +0000 UTC m=+0.156892256 container attach 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:51:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:51:08 np0005548915 objective_cerf[156200]: {
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:    "1": [
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:        {
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "devices": [
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "/dev/loop3"
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            ],
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "lv_name": "ceph_lv0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "lv_size": "21470642176",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "name": "ceph_lv0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "tags": {
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.cluster_name": "ceph",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.crush_device_class": "",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.encrypted": "0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.osd_id": "1",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.type": "block",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.vdo": "0",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:                "ceph.with_tpm": "0"
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            },
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "type": "block",
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:            "vg_name": "ceph_vg0"
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:        }
Dec  6 04:51:08 np0005548915 objective_cerf[156200]:    ]
Dec  6 04:51:08 np0005548915 objective_cerf[156200]: }
Dec  6 04:51:08 np0005548915 systemd[1]: libpod-2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0.scope: Deactivated successfully.
Dec  6 04:51:08 np0005548915 podman[156184]: 2025-12-06 09:51:08.288712958 +0000 UTC m=+0.499291719 container died 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:51:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e-merged.mount: Deactivated successfully.
Dec  6 04:51:08 np0005548915 podman[156184]: 2025-12-06 09:51:08.337144145 +0000 UTC m=+0.547722886 container remove 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:51:08 np0005548915 systemd[1]: libpod-conmon-2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0.scope: Deactivated successfully.
Dec  6 04:51:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:08 np0005548915 ovn_controller[152417]: 2025-12-06T09:51:08Z|00025|memory|INFO|16384 kB peak resident set size after 29.5 seconds
Dec  6 04:51:08 np0005548915 ovn_controller[152417]: 2025-12-06T09:51:08Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Dec  6 04:51:08 np0005548915 podman[156240]: 2025-12-06 09:51:08.471929813 +0000 UTC m=+0.111682056 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  6 04:51:08 np0005548915 podman[156382]: 2025-12-06 09:51:08.907854339 +0000 UTC m=+0.043586507 container create ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:51:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:51:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:51:08 np0005548915 systemd[1]: Started libpod-conmon-ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b.scope.
Dec  6 04:51:08 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:51:08 np0005548915 podman[156382]: 2025-12-06 09:51:08.886219345 +0000 UTC m=+0.021951503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:51:08 np0005548915 podman[156382]: 2025-12-06 09:51:08.994785526 +0000 UTC m=+0.130517704 container init ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:51:09 np0005548915 podman[156382]: 2025-12-06 09:51:09.006311877 +0000 UTC m=+0.142044005 container start ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:51:09 np0005548915 podman[156382]: 2025-12-06 09:51:09.009619706 +0000 UTC m=+0.145351834 container attach ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 04:51:09 np0005548915 frosty_moore[156427]: 167 167
Dec  6 04:51:09 np0005548915 systemd[1]: libpod-ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b.scope: Deactivated successfully.
Dec  6 04:51:09 np0005548915 podman[156382]: 2025-12-06 09:51:09.017684304 +0000 UTC m=+0.153416432 container died ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:51:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-702f57b851827087fff9dd6485833481e9ebbb235f4e8071dbc63ec578b2dae2-merged.mount: Deactivated successfully.
Dec  6 04:51:09 np0005548915 podman[156382]: 2025-12-06 09:51:09.056618245 +0000 UTC m=+0.192350373 container remove ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  6 04:51:09 np0005548915 systemd[1]: libpod-conmon-ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b.scope: Deactivated successfully.
Dec  6 04:51:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:09 np0005548915 podman[156523]: 2025-12-06 09:51:09.238571786 +0000 UTC m=+0.053270659 container create ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:51:09 np0005548915 systemd[1]: Started libpod-conmon-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope.
Dec  6 04:51:09 np0005548915 podman[156523]: 2025-12-06 09:51:09.214303341 +0000 UTC m=+0.029002234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:51:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:51:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:09 np0005548915 podman[156523]: 2025-12-06 09:51:09.341884875 +0000 UTC m=+0.156583768 container init ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:51:09 np0005548915 podman[156523]: 2025-12-06 09:51:09.348791611 +0000 UTC m=+0.163490484 container start ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  6 04:51:09 np0005548915 podman[156523]: 2025-12-06 09:51:09.351952286 +0000 UTC m=+0.166651179 container attach ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:51:09 np0005548915 python3.9[156529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 04:51:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:09.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 04:51:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:09.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:09 np0005548915 python3.9[156706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014668.8785079-506-100980744621150/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:10 np0005548915 lvm[156768]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:51:10 np0005548915 lvm[156768]: VG ceph_vg0 finished
Dec  6 04:51:10 np0005548915 hungry_jones[156546]: {}
Dec  6 04:51:10 np0005548915 systemd[1]: libpod-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope: Deactivated successfully.
Dec  6 04:51:10 np0005548915 systemd[1]: libpod-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope: Consumed 1.328s CPU time.
Dec  6 04:51:10 np0005548915 podman[156523]: 2025-12-06 09:51:10.198114986 +0000 UTC m=+1.012813879 container died ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:51:10 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376-merged.mount: Deactivated successfully.
Dec  6 04:51:10 np0005548915 podman[156523]: 2025-12-06 09:51:10.259468462 +0000 UTC m=+1.074167365 container remove ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec  6 04:51:10 np0005548915 systemd[1]: libpod-conmon-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope: Deactivated successfully.
Dec  6 04:51:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:51:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:51:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:10 np0005548915 python3.9[156924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:11 np0005548915 python3.9[157052]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014670.1168706-506-162717034982160/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:11 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:11 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:51:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:11.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:11 np0005548915 python3.9[157203]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:51:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:12 np0005548915 python3.9[157357]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:13.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:13 np0005548915 python3.9[157511]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:14 np0005548915 python3.9[157589]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:14 np0005548915 python3.9[157741]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:15 np0005548915 python3.9[157821]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:15.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:15.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:16 np0005548915 python3.9[157973]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:17.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:51:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:17.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:51:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:17.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:51:17 np0005548915 python3.9[158126]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003500 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:17 np0005548915 python3.9[158205]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:17.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:17.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:51:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:18 np0005548915 python3.9[158357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:19 np0005548915 python3.9[158435]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:19.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:19 np0005548915 python3.9[158589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:51:19 np0005548915 systemd[1]: Reloading.
Dec  6 04:51:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:20 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:51:20 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:51:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:21 np0005548915 python3.9[158778]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:21 np0005548915 python3.9[158858]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:22 np0005548915 python3.9[159012]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:22 np0005548915 python3.9[159090]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:23 np0005548915 python3.9[159244]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:51:23 np0005548915 systemd[1]: Reloading.
Dec  6 04:51:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:23 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:51:23 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:51:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:23.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:51:23
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.control', '.mgr', '.nfs', 'default.rgw.meta', 'vms', 'backups']
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:51:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:23.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:51:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:51:23 np0005548915 systemd[1]: Starting Create netns directory...
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:51:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:51:24 np0005548915 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 04:51:24 np0005548915 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 04:51:24 np0005548915 systemd[1]: Finished Create netns directory.
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:51:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:51:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:24 np0005548915 python3.9[159437]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:25 np0005548915 python3.9[159591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:25.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:25.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:26 np0005548915 python3.9[159714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014685.1067019-959-90830669304176/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:27.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:51:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:27.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:51:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:27 np0005548915 python3.9[159867]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:51:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:27 np0005548915 python3.9[160045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:51:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:51:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:28 np0005548915 python3.9[160168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014687.4603326-1034-250211857621147/.source.json _original_basename=.b_nkeeh9 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:29 np0005548915 python3.9[160320]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:29.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:51:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:51:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:31 np0005548915 python3.9[160751]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  6 04:51:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:31.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:32 np0005548915 python3.9[160903]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 04:51:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:33 np0005548915 python3.9[161057]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  6 04:51:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300022c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:35 np0005548915 python3[161237]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 04:51:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:35.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300022e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:37.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:51:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:37.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:51:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:37.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:51:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:37.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:51:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:51:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:51:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:41.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:42 np0005548915 podman[161321]: 2025-12-06 09:51:42.540196612 +0000 UTC m=+3.358865903 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec  6 04:51:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:45 np0005548915 podman[161253]: 2025-12-06 09:51:45.617557307 +0000 UTC m=+10.031679506 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 04:51:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:45.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:45 np0005548915 podman[161411]: 2025-12-06 09:51:45.815020327 +0000 UTC m=+0.089804285 container create ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 04:51:45 np0005548915 podman[161411]: 2025-12-06 09:51:45.745347846 +0000 UTC m=+0.020131814 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 04:51:45 np0005548915 python3[161237]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 04:51:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:45.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:47.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:51:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002360 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:47.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:47 np0005548915 python3.9[161628]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:51:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:47.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:51:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:48 np0005548915 python3.9[161782]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:49 np0005548915 python3.9[161858]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:51:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:49 np0005548915 python3.9[162011]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014709.150902-1298-182060556066640/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:51:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:49.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:50 np0005548915 python3.9[162087]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:51:50 np0005548915 systemd[1]: Reloading.
Dec  6 04:51:50 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:51:50 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:51:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  6 04:51:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:51 np0005548915 python3.9[162205]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:51:51 np0005548915 systemd[1]: Reloading.
Dec  6 04:51:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:51 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:51:51 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:51:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:51.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:51 np0005548915 systemd[1]: Starting ovn_metadata_agent container...
Dec  6 04:51:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:52 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:51:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50d8f04ba2ccac2f396f3c4fed03e6d5841af9ca74d7e8187f560f79e9437d8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50d8f04ba2ccac2f396f3c4fed03e6d5841af9ca74d7e8187f560f79e9437d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 04:51:52 np0005548915 systemd[1]: Started /usr/bin/podman healthcheck run ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2.
Dec  6 04:51:52 np0005548915 podman[162246]: 2025-12-06 09:51:52.089883229 +0000 UTC m=+0.141371580 container init ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + sudo -E kolla_set_configs
Dec  6 04:51:52 np0005548915 podman[162246]: 2025-12-06 09:51:52.117248436 +0000 UTC m=+0.168736767 container start ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  6 04:51:52 np0005548915 edpm-start-podman-container[162246]: ovn_metadata_agent
Dec  6 04:51:52 np0005548915 podman[162268]: 2025-12-06 09:51:52.196167903 +0000 UTC m=+0.064095928 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  6 04:51:52 np0005548915 edpm-start-podman-container[162245]: Creating additional drop-in dependency for "ovn_metadata_agent" (ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2)
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Validating config file
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Copying service configuration files
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Writing out command to execute
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: ++ cat /run_command
Dec  6 04:51:52 np0005548915 systemd[1]: Reloading.
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + CMD=neutron-ovn-metadata-agent
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + ARGS=
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + sudo kolla_copy_cacerts
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + [[ ! -n '' ]]
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + . kolla_extend_start
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: Running command: 'neutron-ovn-metadata-agent'
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + umask 0022
Dec  6 04:51:52 np0005548915 ovn_metadata_agent[162262]: + exec neutron-ovn-metadata-agent
Dec  6 04:51:52 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:51:52 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:51:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:52 np0005548915 systemd[1]: Started ovn_metadata_agent container.
Dec  6 04:51:52 np0005548915 systemd[1]: session-52.scope: Deactivated successfully.
Dec  6 04:51:52 np0005548915 systemd[1]: session-52.scope: Consumed 57.782s CPU time.
Dec  6 04:51:52 np0005548915 systemd-logind[795]: Session 52 logged out. Waiting for processes to exit.
Dec  6 04:51:52 np0005548915 systemd-logind[795]: Removed session 52.
Dec  6 04:51:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:53.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:53.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:51:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:51:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:51:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:51:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:51:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:51:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:51:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:51:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.189 162267 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.189 162267 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.189 162267 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.222 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.222 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.222 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.231 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.244 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d39b5be8-d4cf-41c7-9a64-1ee03801f4e1 (UUID: d39b5be8-d4cf-41c7-9a64-1ee03801f4e1) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.265 162267 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.266 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.266 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.266 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.269 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.275 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.281 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd39b5be8-d4cf-41c7-9a64-1ee03801f4e1'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], external_ids={}, name=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, nb_cfg_timestamp=1765014646989, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.282 162267 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f70c2851f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.282 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.283 162267 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.283 162267 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.283 162267 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.287 162267 DEBUG oslo_service.service [-] Started child 162380 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.290 162267 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp551o2lw7/privsep.sock']#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.291 162380 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-168927'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.314 162380 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.315 162380 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.315 162380 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.318 162380 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.326 162380 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.332 162380 INFO eventlet.wsgi.server [-] (162380) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  6 04:51:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:54 np0005548915 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.986 162267 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.987 162267 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp551o2lw7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.855 162385 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.859 162385 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.861 162385 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.861 162385 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162385#033[00m
Dec  6 04:51:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.991 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[faedbc8a-40e9-4699-81b3-e9ab199645c2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 04:51:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:55 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:55.522 162385 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:51:55 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:55.522 162385 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:51:55 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:55.522 162385 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:51:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:55.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:51:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.049 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[84c5dbab-1250-4578-a479-abb41fe6ac9e]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.052 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, column=external_ids, values=({'neutron:ovn-metadata-id': '765394bf-011d-5efb-b5d8-c10778dc40f3'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.062 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.075 162267 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 04:51:56 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 04:51:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:57.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:51:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:51:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:51:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:51:58 np0005548915 systemd-logind[795]: New session 53 of user zuul.
Dec  6 04:51:58 np0005548915 systemd[1]: Started Session 53 of User zuul.
Dec  6 04:51:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:59 np0005548915 python3.9[162547]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:51:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:51:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:51:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:51:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:51:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:51:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:51:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:00 np0005548915 kernel: ganesha.nfsd[149935]: segfault at 50 ip 00007fd0e208032e sp 00007fd095ffa210 error 4 in libntirpc.so.5.8[7fd0e2065000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  6 04:52:00 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 04:52:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:52:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002f50 fd 39 proxy ignored for local
Dec  6 04:52:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Dec  6 04:52:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec  6 04:52:00 np0005548915 systemd[1]: Started Process Core Dump (PID 162705/UID 0).
Dec  6 04:52:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec  6 04:52:00 np0005548915 python3.9[162706]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:52:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:52:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:01.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:02 np0005548915 python3.9[162875]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:52:02 np0005548915 systemd[1]: Reloading.
Dec  6 04:52:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:02 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:52:02 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:52:03 np0005548915 systemd-coredump[162707]: Process 126373 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 63:#012#0  0x00007fd0e208032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 04:52:03 np0005548915 systemd[1]: systemd-coredump@3-162705-0.service: Deactivated successfully.
Dec  6 04:52:03 np0005548915 systemd[1]: systemd-coredump@3-162705-0.service: Consumed 1.254s CPU time.
Dec  6 04:52:03 np0005548915 podman[162994]: 2025-12-06 09:52:03.344113086 +0000 UTC m=+0.031574012 container died 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 04:52:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704-merged.mount: Deactivated successfully.
Dec  6 04:52:03 np0005548915 podman[162994]: 2025-12-06 09:52:03.399751394 +0000 UTC m=+0.087212300 container remove 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:52:03 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 04:52:03 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 04:52:03 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.067s CPU time.
Dec  6 04:52:03 np0005548915 python3.9[163110]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:52:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:03 np0005548915 network[163127]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:52:03 np0005548915 network[163128]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:52:03 np0005548915 network[163129]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:52:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:52:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:52:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s
Dec  6 04:52:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s
Dec  6 04:52:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:07.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:52:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:07.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:52:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:07.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec  6 04:52:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095208 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:52:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:52:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:52:09 np0005548915 python3.9[163420]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:52:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:52:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:09.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:52:09 np0005548915 python3.9[163575]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:52:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:52:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:52:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec  6 04:52:10 np0005548915 python3.9[163728]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:52:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:52:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:52:11 np0005548915 python3.9[163949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:52:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:52:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:52:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:52:11 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:52:11 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:52:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:11.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec  6 04:52:12 np0005548915 python3.9[164119]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:52:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:52:13 np0005548915 python3.9[164272]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:13 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 4.
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:52:13 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:52:13 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.067s CPU time.
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:52:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:52:13 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:52:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:13.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:13 np0005548915 podman[164524]: 2025-12-06 09:52:13.85077911 +0000 UTC m=+0.026349540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:52:13 np0005548915 python3.9[164441]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:52:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec  6 04:52:14 np0005548915 podman[164524]: 2025-12-06 09:52:14.210867593 +0000 UTC m=+0.386437973 container create 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 04:52:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:14 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:14 np0005548915 podman[164524]: 2025-12-06 09:52:14.47021956 +0000 UTC m=+0.645790030 container init 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:52:14 np0005548915 podman[164524]: 2025-12-06 09:52:14.4813444 +0000 UTC m=+0.656914820 container start 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 04:52:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:52:14 np0005548915 bash[164524]: 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449
Dec  6 04:52:14 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 04:52:14 np0005548915 podman[164580]: 2025-12-06 09:52:14.588815916 +0000 UTC m=+0.220997056 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  6 04:52:14 np0005548915 podman[164673]: 2025-12-06 09:52:14.824027842 +0000 UTC m=+0.071149837 container create b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 04:52:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:52:14 np0005548915 podman[164673]: 2025-12-06 09:52:14.775096995 +0000 UTC m=+0.022219000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:52:14 np0005548915 systemd[1]: Started libpod-conmon-b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e.scope.
Dec  6 04:52:14 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:52:15 np0005548915 podman[164673]: 2025-12-06 09:52:15.0362272 +0000 UTC m=+0.283349185 container init b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:52:15 np0005548915 podman[164673]: 2025-12-06 09:52:15.045257323 +0000 UTC m=+0.292379288 container start b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:52:15 np0005548915 infallible_ptolemy[164691]: 167 167
Dec  6 04:52:15 np0005548915 systemd[1]: libpod-b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e.scope: Deactivated successfully.
Dec  6 04:52:15 np0005548915 podman[164673]: 2025-12-06 09:52:15.054447941 +0000 UTC m=+0.301569916 container attach b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:52:15 np0005548915 podman[164673]: 2025-12-06 09:52:15.055031347 +0000 UTC m=+0.302153322 container died b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 04:52:15 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ea82101dbf0191c4d291058006dbbe72803bfe89c7263f42a7abf8394f88b308-merged.mount: Deactivated successfully.
Dec  6 04:52:15 np0005548915 podman[164673]: 2025-12-06 09:52:15.138140906 +0000 UTC m=+0.385262861 container remove b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:52:15 np0005548915 systemd[1]: libpod-conmon-b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e.scope: Deactivated successfully.
Dec  6 04:52:15 np0005548915 podman[164715]: 2025-12-06 09:52:15.278085587 +0000 UTC m=+0.023439043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:52:15 np0005548915 podman[164715]: 2025-12-06 09:52:15.380561218 +0000 UTC m=+0.125914654 container create 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:52:15 np0005548915 systemd[1]: Started libpod-conmon-0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd.scope.
Dec  6 04:52:15 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:52:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:15 np0005548915 podman[164715]: 2025-12-06 09:52:15.69571385 +0000 UTC m=+0.441067336 container init 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:52:15 np0005548915 podman[164715]: 2025-12-06 09:52:15.703937602 +0000 UTC m=+0.449291038 container start 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  6 04:52:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:15.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:15.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:16 np0005548915 podman[164715]: 2025-12-06 09:52:16.009227287 +0000 UTC m=+0.754580773 container attach 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 04:52:16 np0005548915 cool_dhawan[164732]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:52:16 np0005548915 cool_dhawan[164732]: --> All data devices are unavailable
Dec  6 04:52:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Dec  6 04:52:16 np0005548915 systemd[1]: libpod-0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd.scope: Deactivated successfully.
Dec  6 04:52:16 np0005548915 podman[164715]: 2025-12-06 09:52:16.057697482 +0000 UTC m=+0.803050918 container died 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:52:16 np0005548915 systemd[1]: var-lib-containers-storage-overlay-10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50-merged.mount: Deactivated successfully.
Dec  6 04:52:16 np0005548915 python3.9[164887]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:17.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:52:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:17.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:52:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:52:17 np0005548915 python3.9[165040]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:17.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:17.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:17 np0005548915 python3.9[165193]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 85 B/s wr, 70 op/s
Dec  6 04:52:18 np0005548915 podman[164715]: 2025-12-06 09:52:18.130138451 +0000 UTC m=+2.875491897 container remove 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 04:52:18 np0005548915 systemd[1]: libpod-conmon-0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd.scope: Deactivated successfully.
Dec  6 04:52:18 np0005548915 python3.9[165395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:18 np0005548915 podman[165439]: 2025-12-06 09:52:18.630227835 +0000 UTC m=+0.022048516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:52:19 np0005548915 python3.9[165605]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  6 04:52:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  6 04:52:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:19 np0005548915 podman[165439]: 2025-12-06 09:52:19.91161898 +0000 UTC m=+1.303439631 container create 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 04:52:19 np0005548915 systemd[1]: Started libpod-conmon-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope.
Dec  6 04:52:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:52:20 np0005548915 python3.9[165758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:52:20 np0005548915 podman[165439]: 2025-12-06 09:52:20.380180954 +0000 UTC m=+1.772001625 container init 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:52:20 np0005548915 podman[165439]: 2025-12-06 09:52:20.386779092 +0000 UTC m=+1.778599743 container start 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 04:52:20 np0005548915 dreamy_leakey[165761]: 167 167
Dec  6 04:52:20 np0005548915 systemd[1]: libpod-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope: Deactivated successfully.
Dec  6 04:52:20 np0005548915 conmon[165761]: conmon 4f2445c460b9871b26ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope/container/memory.events
Dec  6 04:52:20 np0005548915 python3.9[165923]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:20 np0005548915 podman[165439]: 2025-12-06 09:52:20.604243391 +0000 UTC m=+1.996064062 container attach 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:52:20 np0005548915 podman[165439]: 2025-12-06 09:52:20.606200384 +0000 UTC m=+1.998021055 container died 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 04:52:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:20 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:52:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:20 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:52:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:52:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:52:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8915e4e32f1597136a3b017761d1d8df0d6593be96dc5bc5eab2861f78a7a45a-merged.mount: Deactivated successfully.
Dec  6 04:52:21 np0005548915 podman[165439]: 2025-12-06 09:52:21.617167152 +0000 UTC m=+3.008987813 container remove 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  6 04:52:21 np0005548915 systemd[1]: libpod-conmon-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope: Deactivated successfully.
Dec  6 04:52:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:21 np0005548915 podman[165964]: 2025-12-06 09:52:21.854916538 +0000 UTC m=+0.107179978 container create 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 04:52:21 np0005548915 podman[165964]: 2025-12-06 09:52:21.771995435 +0000 UTC m=+0.024258905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:52:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:21 np0005548915 systemd[1]: Started libpod-conmon-8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d.scope.
Dec  6 04:52:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:52:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:22 np0005548915 podman[165964]: 2025-12-06 09:52:22.014422006 +0000 UTC m=+0.266685486 container init 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:52:22 np0005548915 podman[165964]: 2025-12-06 09:52:22.022250497 +0000 UTC m=+0.274513937 container start 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:52:22 np0005548915 podman[165964]: 2025-12-06 09:52:22.025752251 +0000 UTC m=+0.278015741 container attach 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 04:52:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]: {
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:    "1": [
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:        {
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "devices": [
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "/dev/loop3"
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            ],
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "lv_name": "ceph_lv0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "lv_size": "21470642176",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "name": "ceph_lv0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "tags": {
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.cluster_name": "ceph",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.crush_device_class": "",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.encrypted": "0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.osd_id": "1",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.type": "block",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.vdo": "0",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:                "ceph.with_tpm": "0"
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            },
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "type": "block",
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:            "vg_name": "ceph_vg0"
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:        }
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]:    ]
Dec  6 04:52:22 np0005548915 jolly_chaum[166008]: }
Dec  6 04:52:22 np0005548915 systemd[1]: libpod-8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d.scope: Deactivated successfully.
Dec  6 04:52:22 np0005548915 podman[165964]: 2025-12-06 09:52:22.33561144 +0000 UTC m=+0.587874900 container died 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:52:22 np0005548915 podman[166115]: 2025-12-06 09:52:22.356591495 +0000 UTC m=+0.100464357 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  6 04:52:22 np0005548915 python3.9[166117]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:22 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041-merged.mount: Deactivated successfully.
Dec  6 04:52:22 np0005548915 podman[165964]: 2025-12-06 09:52:22.626341913 +0000 UTC m=+0.878605353 container remove 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 04:52:22 np0005548915 systemd[1]: libpod-conmon-8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d.scope: Deactivated successfully.
Dec  6 04:52:23 np0005548915 podman[166395]: 2025-12-06 09:52:23.196513545 +0000 UTC m=+0.022479836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:52:23 np0005548915 python3.9[166394]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:23 np0005548915 podman[166395]: 2025-12-06 09:52:23.367079471 +0000 UTC m=+0.193045762 container create 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:52:23 np0005548915 systemd[1]: Started libpod-conmon-1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81.scope.
Dec  6 04:52:23 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:52:23 np0005548915 podman[166395]: 2025-12-06 09:52:23.450201571 +0000 UTC m=+0.276167862 container init 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 04:52:23 np0005548915 podman[166395]: 2025-12-06 09:52:23.459727677 +0000 UTC m=+0.285693968 container start 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  6 04:52:23 np0005548915 podman[166395]: 2025-12-06 09:52:23.46427045 +0000 UTC m=+0.290236721 container attach 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:52:23 np0005548915 youthful_turing[166437]: 167 167
Dec  6 04:52:23 np0005548915 systemd[1]: libpod-1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81.scope: Deactivated successfully.
Dec  6 04:52:23 np0005548915 podman[166395]: 2025-12-06 09:52:23.469033328 +0000 UTC m=+0.294999619 container died 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  6 04:52:23 np0005548915 systemd[1]: var-lib-containers-storage-overlay-5a85d94bfad40f018ee5ad491b22faca1ee45a05965a4fe5e955544494945894-merged.mount: Deactivated successfully.
Dec  6 04:52:23 np0005548915 podman[166395]: 2025-12-06 09:52:23.512069758 +0000 UTC m=+0.338036029 container remove 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 04:52:23 np0005548915 systemd[1]: libpod-conmon-1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81.scope: Deactivated successfully.
Dec  6 04:52:23 np0005548915 podman[166539]: 2025-12-06 09:52:23.696933379 +0000 UTC m=+0.056462133 container create 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:52:23 np0005548915 systemd[1]: Started libpod-conmon-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope.
Dec  6 04:52:23 np0005548915 podman[166539]: 2025-12-06 09:52:23.675615884 +0000 UTC m=+0.035144638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:52:23 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:52:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:23 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:52:23 np0005548915 podman[166539]: 2025-12-06 09:52:23.796290115 +0000 UTC m=+0.155818869 container init 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:52:23 np0005548915 podman[166539]: 2025-12-06 09:52:23.804735683 +0000 UTC m=+0.164264437 container start 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:52:23
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.nfs', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:52:23 np0005548915 podman[166539]: 2025-12-06 09:52:23.809012018 +0000 UTC m=+0.168540772 container attach 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 04:52:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:52:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:52:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:52:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:52:24 np0005548915 python3.9[166607]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:52:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:52:24 np0005548915 lvm[166830]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:52:24 np0005548915 lvm[166830]: VG ceph_vg0 finished
Dec  6 04:52:24 np0005548915 pensive_feynman[166604]: {}
Dec  6 04:52:24 np0005548915 systemd[1]: libpod-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope: Deactivated successfully.
Dec  6 04:52:24 np0005548915 systemd[1]: libpod-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope: Consumed 1.103s CPU time.
Dec  6 04:52:24 np0005548915 podman[166539]: 2025-12-06 09:52:24.56956768 +0000 UTC m=+0.929096414 container died 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:52:24 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82-merged.mount: Deactivated successfully.
Dec  6 04:52:24 np0005548915 podman[166539]: 2025-12-06 09:52:24.61557486 +0000 UTC m=+0.975103604 container remove 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:52:24 np0005548915 python3.9[166831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:24 np0005548915 systemd[1]: libpod-conmon-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope: Deactivated successfully.
Dec  6 04:52:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:52:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:52:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:24 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:24 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:52:25 np0005548915 python3.9[167022]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:25 np0005548915 python3.9[167175]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:25.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:52:26 np0005548915 python3.9[167327]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 04:52:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:52:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:27.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:52:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:27.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:52:27 np0005548915 python3.9[167495]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:27 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd604000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:27 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:27.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:52:28 np0005548915 python3.9[167673]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 04:52:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:28 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8001550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:29 np0005548915 python3.9[167825]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:52:29 np0005548915 systemd[1]: Reloading.
Dec  6 04:52:29 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:52:29 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:52:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:29 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0011d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:29 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4000f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:29.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:52:30 np0005548915 python3.9[168015]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095230 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:52:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:30 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:52:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:52:30 np0005548915 python3.9[168168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:31 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:31 np0005548915 python3.9[168322]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:31 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:31.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:31.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:52:32 np0005548915 python3.9[168476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:32 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:32 np0005548915 python3.9[168629]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:33 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:33 np0005548915 python3.9[168783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:33 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:33.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:33.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:52:34 np0005548915 python3.9[168937]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:52:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:34 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:35 np0005548915 python3.9[169091]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  6 04:52:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:35 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:35 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:52:36 np0005548915 python3.9[169245]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 04:52:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:36 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:37.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:52:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:37.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:52:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:37 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:37 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:37 np0005548915 python3.9[169405]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 04:52:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:37 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:52:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:37 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:52:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:52:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:37.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:52:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:52:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:38 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:52:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:52:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:39 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:39 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:39.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:40 np0005548915 python3.9[169568]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:52:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:40 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:52:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:52:41 np0005548915 python3.9[169652]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:52:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:41 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:41 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:42 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:43 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:43 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:43.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:44 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:45 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:45 np0005548915 podman[169666]: 2025-12-06 09:52:45.488097836 +0000 UTC m=+0.109867862 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec  6 04:52:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:45 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:45.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:45.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:46 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:47.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:52:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:47 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:47 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:47.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:47.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:52:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:48 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:49 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:49 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:52:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:49.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:52:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:49.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:50 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:52:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:52:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:51 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:51 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:51.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:51.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:52 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:53 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:53 np0005548915 podman[169729]: 2025-12-06 09:52:53.424884983 +0000 UTC m=+0.061313795 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec  6 04:52:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:53 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:53.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:52:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:52:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:53.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:52:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:52:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:52:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:52:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:52:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:52:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:52:54.223 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:52:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:52:54.224 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:52:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:52:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:52:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:54 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:55 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:55 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:55.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:55.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:52:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:56 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:57.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:52:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:57.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:52:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:57 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:57 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:52:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:52:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:57.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:52:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:52:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:58 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:59 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:59 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:52:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:52:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:59.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:52:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:52:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:52:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:52:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:59.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:00 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:53:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:53:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:01 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:01 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:53:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:01.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:53:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:01.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:02 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:03 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:03 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:03.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:04 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:05 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:05 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:53:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:05.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:53:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:05.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:06 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:07.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:53:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:07.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:53:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:07 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:07 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:07.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:53:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:08 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:53:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:53:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:09 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:09 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:09.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:09.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:10 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:11 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:11 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:11.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:12 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:13 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:13 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:13.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:13.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:15 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:15 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:15.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:15.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:16 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:16 np0005548915 podman[169975]: 2025-12-06 09:53:16.472868697 +0000 UTC m=+0.100655562 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  6 04:53:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:17.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:53:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:53:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:17 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:17 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:17.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:53:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:18 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc003c10 fd 38 proxy ignored for local
Dec  6 04:53:18 np0005548915 kernel: ganesha.nfsd[169753]: segfault at 50 ip 00007fd6acef232e sp 00007fd66dffa210 error 4 in libntirpc.so.5.8[7fd6aced7000+2c000] likely on CPU 2 (core 0, socket 2)
Dec  6 04:53:18 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 04:53:18 np0005548915 systemd[1]: Started Process Core Dump (PID 170004/UID 0).
Dec  6 04:53:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:53:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:19.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:53:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:21.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:21.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:22 np0005548915 systemd-coredump[170005]: Process 164594 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007fd6acef232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 04:53:22 np0005548915 systemd[1]: systemd-coredump@4-170004-0.service: Deactivated successfully.
Dec  6 04:53:22 np0005548915 systemd[1]: systemd-coredump@4-170004-0.service: Consumed 1.193s CPU time.
Dec  6 04:53:22 np0005548915 podman[170014]: 2025-12-06 09:53:22.539088182 +0000 UTC m=+0.031405049 container died 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:53:22 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae-merged.mount: Deactivated successfully.
Dec  6 04:53:22 np0005548915 podman[170014]: 2025-12-06 09:53:22.585940856 +0000 UTC m=+0.078257703 container remove 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:53:22 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 04:53:22 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 04:53:22 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.624s CPU time.
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:53:23
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', '.rgw.root', '.nfs', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:53:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:53:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:53:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:23.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:53:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:53:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:53:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:53:24 np0005548915 podman[170060]: 2025-12-06 09:53:24.436684495 +0000 UTC m=+0.066131208 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 04:53:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:53:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:53:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:53:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:53:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:26 np0005548915 kernel: SELinux:  Converting 2776 SID table entries...
Dec  6 04:53:26 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:53:26 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:53:26 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:53:26 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:53:26 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:53:26 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:53:26 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:53:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:53:26 np0005548915 podman[170260]: 2025-12-06 09:53:26.440090841 +0000 UTC m=+0.064993766 container create c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:53:26 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec  6 04:53:26 np0005548915 podman[170260]: 2025-12-06 09:53:26.399099699 +0000 UTC m=+0.024002614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:53:26 np0005548915 systemd[1]: Started libpod-conmon-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope.
Dec  6 04:53:26 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:53:26 np0005548915 podman[170260]: 2025-12-06 09:53:26.570107683 +0000 UTC m=+0.195010658 container init c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:53:26 np0005548915 podman[170260]: 2025-12-06 09:53:26.579417311 +0000 UTC m=+0.204320236 container start c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:53:26 np0005548915 nice_chebyshev[170276]: 167 167
Dec  6 04:53:26 np0005548915 systemd[1]: libpod-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope: Deactivated successfully.
Dec  6 04:53:26 np0005548915 conmon[170276]: conmon c3b19d3bcac140511b3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope/container/memory.events
Dec  6 04:53:26 np0005548915 podman[170260]: 2025-12-06 09:53:26.585351515 +0000 UTC m=+0.210254480 container attach c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 04:53:26 np0005548915 podman[170260]: 2025-12-06 09:53:26.588167032 +0000 UTC m=+0.213069957 container died c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:53:26 np0005548915 systemd[1]: var-lib-containers-storage-overlay-da2d3339044fe5a5d0fe5bf36f3553f698d939baa721771d79bb41633846890d-merged.mount: Deactivated successfully.
Dec  6 04:53:26 np0005548915 podman[170260]: 2025-12-06 09:53:26.648793898 +0000 UTC m=+0.273696823 container remove c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 04:53:26 np0005548915 systemd[1]: libpod-conmon-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope: Deactivated successfully.
Dec  6 04:53:26 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:53:26 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:26 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:26 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:53:26 np0005548915 podman[170302]: 2025-12-06 09:53:26.842512569 +0000 UTC m=+0.055630788 container create c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:53:26 np0005548915 systemd[1]: Started libpod-conmon-c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3.scope.
Dec  6 04:53:26 np0005548915 podman[170302]: 2025-12-06 09:53:26.812787838 +0000 UTC m=+0.025906067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:53:26 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:53:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:26 np0005548915 podman[170302]: 2025-12-06 09:53:26.945654118 +0000 UTC m=+0.158772317 container init c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:53:26 np0005548915 podman[170302]: 2025-12-06 09:53:26.954127933 +0000 UTC m=+0.167246162 container start c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 04:53:26 np0005548915 podman[170302]: 2025-12-06 09:53:26.958994096 +0000 UTC m=+0.172112275 container attach c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:53:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:53:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:27.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:53:27 np0005548915 youthful_germain[170319]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:53:27 np0005548915 youthful_germain[170319]: --> All data devices are unavailable
Dec  6 04:53:27 np0005548915 systemd[1]: libpod-c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3.scope: Deactivated successfully.
Dec  6 04:53:27 np0005548915 podman[170302]: 2025-12-06 09:53:27.293396015 +0000 UTC m=+0.506514234 container died c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  6 04:53:27 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce-merged.mount: Deactivated successfully.
Dec  6 04:53:27 np0005548915 podman[170302]: 2025-12-06 09:53:27.366032312 +0000 UTC m=+0.579150531 container remove c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:53:27 np0005548915 systemd[1]: libpod-conmon-c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3.scope: Deactivated successfully.
Dec  6 04:53:27 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 04:53:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:27.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:28 np0005548915 podman[170467]: 2025-12-06 09:53:28.061508285 +0000 UTC m=+0.057842169 container create e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  6 04:53:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:53:28 np0005548915 systemd[1]: Started libpod-conmon-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope.
Dec  6 04:53:28 np0005548915 podman[170467]: 2025-12-06 09:53:28.035218188 +0000 UTC m=+0.031552102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:53:28 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:53:28 np0005548915 podman[170467]: 2025-12-06 09:53:28.163122252 +0000 UTC m=+0.159456176 container init e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:53:28 np0005548915 podman[170467]: 2025-12-06 09:53:28.170703452 +0000 UTC m=+0.167037326 container start e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 04:53:28 np0005548915 podman[170467]: 2025-12-06 09:53:28.173860269 +0000 UTC m=+0.170194143 container attach e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:53:28 np0005548915 xenodochial_driscoll[170483]: 167 167
Dec  6 04:53:28 np0005548915 systemd[1]: libpod-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope: Deactivated successfully.
Dec  6 04:53:28 np0005548915 conmon[170483]: conmon e48b432e6e6b0f02b171 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope/container/memory.events
Dec  6 04:53:28 np0005548915 podman[170467]: 2025-12-06 09:53:28.178668882 +0000 UTC m=+0.175002806 container died e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  6 04:53:28 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0c6d659bf0583dac05bfe647bfd55d28bcc1b0f9a3252f86cfb8ebfa22e4b1fe-merged.mount: Deactivated successfully.
Dec  6 04:53:28 np0005548915 podman[170467]: 2025-12-06 09:53:28.219278923 +0000 UTC m=+0.215612797 container remove e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 04:53:28 np0005548915 systemd[1]: libpod-conmon-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope: Deactivated successfully.
Dec  6 04:53:28 np0005548915 podman[170506]: 2025-12-06 09:53:28.382613145 +0000 UTC m=+0.042224877 container create 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 04:53:28 np0005548915 systemd[1]: Started libpod-conmon-7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7.scope.
Dec  6 04:53:28 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:53:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:28 np0005548915 podman[170506]: 2025-12-06 09:53:28.365894573 +0000 UTC m=+0.025506325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:53:28 np0005548915 podman[170506]: 2025-12-06 09:53:28.47217645 +0000 UTC m=+0.131788202 container init 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:53:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095328 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:53:28 np0005548915 podman[170506]: 2025-12-06 09:53:28.48015379 +0000 UTC m=+0.139765522 container start 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:53:28 np0005548915 podman[170506]: 2025-12-06 09:53:28.483927945 +0000 UTC m=+0.143539697 container attach 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]: {
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:    "1": [
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:        {
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "devices": [
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "/dev/loop3"
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            ],
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "lv_name": "ceph_lv0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "lv_size": "21470642176",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "name": "ceph_lv0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "tags": {
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.cluster_name": "ceph",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.crush_device_class": "",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.encrypted": "0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.osd_id": "1",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.type": "block",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.vdo": "0",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:                "ceph.with_tpm": "0"
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            },
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "type": "block",
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:            "vg_name": "ceph_vg0"
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:        }
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]:    ]
Dec  6 04:53:28 np0005548915 sharp_shirley[170522]: }
Dec  6 04:53:28 np0005548915 systemd[1]: libpod-7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7.scope: Deactivated successfully.
Dec  6 04:53:28 np0005548915 podman[170506]: 2025-12-06 09:53:28.819426253 +0000 UTC m=+0.479037995 container died 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  6 04:53:28 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736-merged.mount: Deactivated successfully.
Dec  6 04:53:28 np0005548915 podman[170506]: 2025-12-06 09:53:28.855604122 +0000 UTC m=+0.515215854 container remove 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  6 04:53:28 np0005548915 systemd[1]: libpod-conmon-7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7.scope: Deactivated successfully.
Dec  6 04:53:29 np0005548915 podman[170633]: 2025-12-06 09:53:29.40245997 +0000 UTC m=+0.046717441 container create 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  6 04:53:29 np0005548915 systemd[1]: Started libpod-conmon-044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb.scope.
Dec  6 04:53:29 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:53:29 np0005548915 podman[170633]: 2025-12-06 09:53:29.382638122 +0000 UTC m=+0.026895643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:53:29 np0005548915 podman[170633]: 2025-12-06 09:53:29.490270156 +0000 UTC m=+0.134527677 container init 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 04:53:29 np0005548915 podman[170633]: 2025-12-06 09:53:29.496180109 +0000 UTC m=+0.140437590 container start 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:53:29 np0005548915 podman[170633]: 2025-12-06 09:53:29.499634624 +0000 UTC m=+0.143892105 container attach 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:53:29 np0005548915 upbeat_davinci[170650]: 167 167
Dec  6 04:53:29 np0005548915 systemd[1]: libpod-044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb.scope: Deactivated successfully.
Dec  6 04:53:29 np0005548915 podman[170633]: 2025-12-06 09:53:29.502124203 +0000 UTC m=+0.146381694 container died 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 04:53:29 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3235420f1c98a6f78b3461437c474a6421ce700d46ca95913e631bd56c769450-merged.mount: Deactivated successfully.
Dec  6 04:53:29 np0005548915 podman[170633]: 2025-12-06 09:53:29.54108168 +0000 UTC m=+0.185339161 container remove 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 04:53:29 np0005548915 systemd[1]: libpod-conmon-044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb.scope: Deactivated successfully.
Dec  6 04:53:29 np0005548915 podman[170676]: 2025-12-06 09:53:29.714633924 +0000 UTC m=+0.044868690 container create 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 04:53:29 np0005548915 systemd[1]: Started libpod-conmon-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope.
Dec  6 04:53:29 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:53:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:29 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:29 np0005548915 podman[170676]: 2025-12-06 09:53:29.78869939 +0000 UTC m=+0.118934176 container init 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:53:29 np0005548915 podman[170676]: 2025-12-06 09:53:29.694778636 +0000 UTC m=+0.025013452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:53:29 np0005548915 podman[170676]: 2025-12-06 09:53:29.796847705 +0000 UTC m=+0.127082471 container start 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:53:29 np0005548915 podman[170676]: 2025-12-06 09:53:29.800593409 +0000 UTC m=+0.130828175 container attach 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 04:53:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:29.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:53:30 np0005548915 lvm[170767]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:53:30 np0005548915 lvm[170767]: VG ceph_vg0 finished
Dec  6 04:53:30 np0005548915 priceless_ritchie[170693]: {}
Dec  6 04:53:30 np0005548915 systemd[1]: libpod-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope: Deactivated successfully.
Dec  6 04:53:30 np0005548915 systemd[1]: libpod-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope: Consumed 1.010s CPU time.
Dec  6 04:53:30 np0005548915 podman[170676]: 2025-12-06 09:53:30.42357727 +0000 UTC m=+0.753812046 container died 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 04:53:30 np0005548915 systemd[1]: var-lib-containers-storage-overlay-5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858-merged.mount: Deactivated successfully.
Dec  6 04:53:30 np0005548915 podman[170676]: 2025-12-06 09:53:30.473586811 +0000 UTC m=+0.803821577 container remove 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:53:30 np0005548915 systemd[1]: libpod-conmon-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope: Deactivated successfully.
Dec  6 04:53:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:53:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:53:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:53:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:53:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:53:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:53:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:53:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:53:32 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 5.
Dec  6 04:53:32 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:53:32 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.624s CPU time.
Dec  6 04:53:32 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:53:33 np0005548915 podman[170862]: 2025-12-06 09:53:33.14311928 +0000 UTC m=+0.070818738 container create c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:53:33 np0005548915 podman[170862]: 2025-12-06 09:53:33.110219781 +0000 UTC m=+0.037919309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:53:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:53:33 np0005548915 podman[170862]: 2025-12-06 09:53:33.232628223 +0000 UTC m=+0.160327741 container init c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:53:33 np0005548915 podman[170862]: 2025-12-06 09:53:33.241138658 +0000 UTC m=+0.168838126 container start c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:53:33 np0005548915 bash[170862]: c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732
Dec  6 04:53:33 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 04:53:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:53:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:33.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:33.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:53:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:35.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:35.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:53:36 np0005548915 kernel: SELinux:  Converting 2776 SID table entries...
Dec  6 04:53:36 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:53:36 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:53:36 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:53:36 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:53:36 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:53:36 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:53:36 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:53:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:37.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:53:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:37.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:53:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:53:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:53:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:53:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:53:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:39.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:53:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:42.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 04:53:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:44.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:53:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:53:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:46 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:47.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:53:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:47.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:53:47 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  6 04:53:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:47 np0005548915 podman[170956]: 2025-12-06 09:53:47.522283999 +0000 UTC m=+0.120726106 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  6 04:53:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:47.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:53:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:53:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:53:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095348 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:53:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:48 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 04:53:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:49.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 04:53:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:53:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:50 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:53:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:51.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:53:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:52 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:53:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:53:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:53:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:53:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:53:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:53:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:53:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:53:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:53:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:53:54.224 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:53:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:53:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:53:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:53:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:53:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:54 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:55 np0005548915 podman[175306]: 2025-12-06 09:53:55.455337327 +0000 UTC m=+0.081370809 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  6 04:53:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:53:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:53:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:53:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:56 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:57.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:53:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:58.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:53:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:53:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:58 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore.transactions t=2025-12-06T09:53:59.577024905Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  6 04:53:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T09:53:59.615954538Z level=info msg="Completed cleanup jobs" duration=52.741907ms
Dec  6 04:53:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T09:53:59.71028265Z level=info msg="Update check succeeded" duration=45.174332ms
Dec  6 04:53:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T09:53:59.714168366Z level=info msg="Update check succeeded" duration=87.979992ms
Dec  6 04:53:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:53:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:53:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:53:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:53:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:59.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:00 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:54:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:54:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:01.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:02.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:02 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  6 04:54:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:03.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  6 04:54:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:54:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:04 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:05.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:06 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:07.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:54:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:07.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:54:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:07.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:08.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:54:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:08 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:54:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:54:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:10.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:10.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:10 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:54:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:54:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:12.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:12 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:14.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:54:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:14 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:16.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:16.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:16 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:17.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:54:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:17.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:54:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:17.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:54:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:18.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:18.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:54:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:18 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c44000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:18 np0005548915 podman[186895]: 2025-12-06 09:54:18.549597905 +0000 UTC m=+0.143866444 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 04:54:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:20.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:20 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:54:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 04:54:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c44001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:22.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:22 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:54:23
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:54:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:54:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:54:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:54:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:24.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:54:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:54:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:24 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095425 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:54:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:26.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:54:26 np0005548915 podman[187926]: 2025-12-06 09:54:26.463217607 +0000 UTC m=+0.066166321 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  6 04:54:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:26 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:27.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:54:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:27.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:54:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:27.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:54:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:28.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:54:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:28 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:30.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:54:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:30 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:54:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 04:54:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  6 04:54:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:54:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:32.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:54:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:32 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 04:54:33 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 04:54:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 04:54:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:33 np0005548915 kernel: SELinux:  Converting 2777 SID table entries...
Dec  6 04:54:33 np0005548915 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 04:54:33 np0005548915 kernel: SELinux:  policy capability open_perms=1
Dec  6 04:54:33 np0005548915 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 04:54:33 np0005548915 kernel: SELinux:  policy capability always_check_network=0
Dec  6 04:54:33 np0005548915 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 04:54:33 np0005548915 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 04:54:33 np0005548915 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 04:54:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:54:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:34.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:34.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:54:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  6 04:54:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:54:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:34 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:34 np0005548915 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  6 04:54:34 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  6 04:54:34 np0005548915 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  6 04:54:34 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:34 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:34 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 04:54:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:54:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:36.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:36.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:54:36 np0005548915 podman[188211]: 2025-12-06 09:54:36.509027268 +0000 UTC m=+0.071291690 container create f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:54:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:36 np0005548915 podman[188211]: 2025-12-06 09:54:36.469990881 +0000 UTC m=+0.032255343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:54:36 np0005548915 systemd[1]: Started libpod-conmon-f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11.scope.
Dec  6 04:54:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:54:36 np0005548915 podman[188211]: 2025-12-06 09:54:36.618455939 +0000 UTC m=+0.180720391 container init f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:54:36 np0005548915 podman[188211]: 2025-12-06 09:54:36.626501617 +0000 UTC m=+0.188766029 container start f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:54:36 np0005548915 podman[188211]: 2025-12-06 09:54:36.630209358 +0000 UTC m=+0.192473830 container attach f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 04:54:36 np0005548915 agitated_kare[188226]: 167 167
Dec  6 04:54:36 np0005548915 systemd[1]: libpod-f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11.scope: Deactivated successfully.
Dec  6 04:54:36 np0005548915 podman[188211]: 2025-12-06 09:54:36.635142981 +0000 UTC m=+0.197407363 container died f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:54:36 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b4a73f4c674f06cc2a45b36d7983b25964f762b956cd014dba73e49a7079d52f-merged.mount: Deactivated successfully.
Dec  6 04:54:36 np0005548915 podman[188211]: 2025-12-06 09:54:36.709723119 +0000 UTC m=+0.271987511 container remove f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 04:54:36 np0005548915 systemd[1]: libpod-conmon-f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11.scope: Deactivated successfully.
Dec  6 04:54:36 np0005548915 podman[188256]: 2025-12-06 09:54:36.921809579 +0000 UTC m=+0.056351017 container create 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:54:36 np0005548915 systemd[1]: Started libpod-conmon-3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550.scope.
Dec  6 04:54:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:54:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:54:36 np0005548915 podman[188256]: 2025-12-06 09:54:36.900895152 +0000 UTC m=+0.035436610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:54:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:54:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:37.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:54:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:37.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:54:37 np0005548915 podman[188256]: 2025-12-06 09:54:37.062096005 +0000 UTC m=+0.196637473 container init 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:54:37 np0005548915 podman[188256]: 2025-12-06 09:54:37.073212586 +0000 UTC m=+0.207754024 container start 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 04:54:37 np0005548915 podman[188256]: 2025-12-06 09:54:37.134792472 +0000 UTC m=+0.269333940 container attach 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:54:37 np0005548915 sweet_diffie[188277]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:54:37 np0005548915 sweet_diffie[188277]: --> All data devices are unavailable
Dec  6 04:54:37 np0005548915 systemd[1]: libpod-3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550.scope: Deactivated successfully.
Dec  6 04:54:37 np0005548915 podman[188256]: 2025-12-06 09:54:37.498021871 +0000 UTC m=+0.632563329 container died 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:54:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960-merged.mount: Deactivated successfully.
Dec  6 04:54:37 np0005548915 podman[188256]: 2025-12-06 09:54:37.571082098 +0000 UTC m=+0.705623576 container remove 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 04:54:37 np0005548915 systemd[1]: libpod-conmon-3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550.scope: Deactivated successfully.
Dec  6 04:54:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:38.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:38.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:54:38 np0005548915 podman[188403]: 2025-12-06 09:54:38.205541928 +0000 UTC m=+0.040370684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:54:38 np0005548915 podman[188403]: 2025-12-06 09:54:38.372668791 +0000 UTC m=+0.207497467 container create c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 04:54:38 np0005548915 systemd[1]: Started libpod-conmon-c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a.scope.
Dec  6 04:54:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:54:38 np0005548915 podman[188403]: 2025-12-06 09:54:38.470707944 +0000 UTC m=+0.305536670 container init c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:54:38 np0005548915 podman[188403]: 2025-12-06 09:54:38.482196695 +0000 UTC m=+0.317025351 container start c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:54:38 np0005548915 suspicious_northcutt[188419]: 167 167
Dec  6 04:54:38 np0005548915 systemd[1]: libpod-c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a.scope: Deactivated successfully.
Dec  6 04:54:38 np0005548915 podman[188403]: 2025-12-06 09:54:38.495334809 +0000 UTC m=+0.330163505 container attach c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 04:54:38 np0005548915 podman[188403]: 2025-12-06 09:54:38.496403539 +0000 UTC m=+0.331232205 container died c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:54:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:38 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f93b2de88826d08d80749233cbe88df2319425a9868aa74bd4647e6b0dc5454d-merged.mount: Deactivated successfully.
Dec  6 04:54:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:54:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:54:38 np0005548915 podman[188403]: 2025-12-06 09:54:38.948553784 +0000 UTC m=+0.783382480 container remove c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 04:54:38 np0005548915 systemd[1]: libpod-conmon-c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a.scope: Deactivated successfully.
Dec  6 04:54:39 np0005548915 podman[188445]: 2025-12-06 09:54:39.206734821 +0000 UTC m=+0.052786080 container create c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 04:54:39 np0005548915 systemd[1]: Started libpod-conmon-c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a.scope.
Dec  6 04:54:39 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:54:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:39 np0005548915 podman[188445]: 2025-12-06 09:54:39.186631507 +0000 UTC m=+0.032682776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:54:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:39 np0005548915 podman[188445]: 2025-12-06 09:54:39.296200612 +0000 UTC m=+0.142251901 container init c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 04:54:39 np0005548915 podman[188445]: 2025-12-06 09:54:39.307931029 +0000 UTC m=+0.153982288 container start c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:54:39 np0005548915 podman[188445]: 2025-12-06 09:54:39.312518114 +0000 UTC m=+0.158569383 container attach c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:54:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]: {
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:    "1": [
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:        {
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "devices": [
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "/dev/loop3"
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            ],
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "lv_name": "ceph_lv0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "lv_size": "21470642176",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "name": "ceph_lv0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "tags": {
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.cluster_name": "ceph",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.crush_device_class": "",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.encrypted": "0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.osd_id": "1",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.type": "block",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.vdo": "0",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:                "ceph.with_tpm": "0"
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            },
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "type": "block",
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:            "vg_name": "ceph_vg0"
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:        }
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]:    ]
Dec  6 04:54:39 np0005548915 crazy_bohr[188462]: }
Dec  6 04:54:39 np0005548915 systemd[1]: libpod-c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a.scope: Deactivated successfully.
Dec  6 04:54:39 np0005548915 podman[188445]: 2025-12-06 09:54:39.601392111 +0000 UTC m=+0.447443360 container died c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 04:54:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7-merged.mount: Deactivated successfully.
Dec  6 04:54:39 np0005548915 podman[188445]: 2025-12-06 09:54:39.718810548 +0000 UTC m=+0.564861797 container remove c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 04:54:39 np0005548915 systemd[1]: libpod-conmon-c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a.scope: Deactivated successfully.
Dec  6 04:54:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:40 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:54:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:40.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:54:40 np0005548915 podman[188712]: 2025-12-06 09:54:40.386173458 +0000 UTC m=+0.048198735 container create 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 04:54:40 np0005548915 systemd[1]: Started libpod-conmon-1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392.scope.
Dec  6 04:54:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:54:40 np0005548915 podman[188712]: 2025-12-06 09:54:40.364912163 +0000 UTC m=+0.026937500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:54:40 np0005548915 podman[188712]: 2025-12-06 09:54:40.47273749 +0000 UTC m=+0.134762797 container init 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:54:40 np0005548915 podman[188712]: 2025-12-06 09:54:40.479045871 +0000 UTC m=+0.141071148 container start 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:54:40 np0005548915 podman[188712]: 2025-12-06 09:54:40.482551056 +0000 UTC m=+0.144576353 container attach 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:54:40 np0005548915 recursing_brown[188741]: 167 167
Dec  6 04:54:40 np0005548915 systemd[1]: libpod-1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392.scope: Deactivated successfully.
Dec  6 04:54:40 np0005548915 podman[188712]: 2025-12-06 09:54:40.486791811 +0000 UTC m=+0.148817088 container died 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:54:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:40 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-57eac60fa28e7fed0f347fd475667cf0d57e693d9a158b12536d56950ea01b1b-merged.mount: Deactivated successfully.
Dec  6 04:54:40 np0005548915 podman[188712]: 2025-12-06 09:54:40.544657007 +0000 UTC m=+0.206682284 container remove 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  6 04:54:40 np0005548915 systemd[1]: libpod-conmon-1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392.scope: Deactivated successfully.
Dec  6 04:54:40 np0005548915 podman[188784]: 2025-12-06 09:54:40.743279852 +0000 UTC m=+0.068676540 container create f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  6 04:54:40 np0005548915 systemd[1]: Started libpod-conmon-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope.
Dec  6 04:54:40 np0005548915 podman[188784]: 2025-12-06 09:54:40.717174575 +0000 UTC m=+0.042571333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:54:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:54:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:54:40 np0005548915 podman[188784]: 2025-12-06 09:54:40.85038709 +0000 UTC m=+0.175783808 container init f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:54:40 np0005548915 podman[188784]: 2025-12-06 09:54:40.859665641 +0000 UTC m=+0.185062329 container start f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:54:40 np0005548915 podman[188784]: 2025-12-06 09:54:40.863269499 +0000 UTC m=+0.188666187 container attach f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:54:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:54:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:54:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:41 np0005548915 lvm[188890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:54:41 np0005548915 lvm[188890]: VG ceph_vg0 finished
Dec  6 04:54:41 np0005548915 pedantic_cartwright[188814]: {}
Dec  6 04:54:41 np0005548915 systemd[1]: libpod-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope: Deactivated successfully.
Dec  6 04:54:41 np0005548915 podman[188784]: 2025-12-06 09:54:41.767550579 +0000 UTC m=+1.092947297 container died f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:54:41 np0005548915 systemd[1]: libpod-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope: Consumed 1.322s CPU time.
Dec  6 04:54:41 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583-merged.mount: Deactivated successfully.
Dec  6 04:54:41 np0005548915 podman[188784]: 2025-12-06 09:54:41.812017613 +0000 UTC m=+1.137414291 container remove f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 04:54:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:41 np0005548915 systemd[1]: libpod-conmon-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope: Deactivated successfully.
Dec  6 04:54:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:54:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:54:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:42.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:54:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:42 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:54:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:54:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:44 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:44 np0005548915 systemd[1]: Stopping OpenSSH server daemon...
Dec  6 04:54:44 np0005548915 systemd[1]: sshd.service: Deactivated successfully.
Dec  6 04:54:44 np0005548915 systemd[1]: Stopped OpenSSH server daemon.
Dec  6 04:54:44 np0005548915 systemd[1]: sshd.service: Consumed 2.954s CPU time, read 32.0K from disk, written 0B to disk.
Dec  6 04:54:44 np0005548915 systemd[1]: Stopped target sshd-keygen.target.
Dec  6 04:54:44 np0005548915 systemd[1]: Stopping sshd-keygen.target...
Dec  6 04:54:44 np0005548915 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 04:54:44 np0005548915 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 04:54:44 np0005548915 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 04:54:44 np0005548915 systemd[1]: Reached target sshd-keygen.target.
Dec  6 04:54:44 np0005548915 systemd[1]: Starting OpenSSH server daemon...
Dec  6 04:54:44 np0005548915 systemd[1]: Started OpenSSH server daemon.
Dec  6 04:54:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095445 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:54:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:46.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:54:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:46 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:47 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:54:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:47.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:54:47 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:54:47 np0005548915 systemd[1]: Reloading.
Dec  6 04:54:47 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:54:47 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:54:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:47 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:54:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:48.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:54:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:48 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:49 np0005548915 podman[191990]: 2025-12-06 09:54:49.474711785 +0000 UTC m=+0.099600297 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 04:54:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:54:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:50 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:54:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:54:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:54:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:54:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:54:52 np0005548915 python3.9[194790]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:54:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:52 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:52 np0005548915 systemd[1]: Reloading.
Dec  6 04:54:52 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:54:52 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:54:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:53 np0005548915 python3.9[195933]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:54:53 np0005548915 systemd[1]: Reloading.
Dec  6 04:54:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:53 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:54:53 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:54:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:54:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:54:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:54:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:54:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:54:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:54:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:54:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:54:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:54:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:54:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:54:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:54:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:54:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:54:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:54:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:54 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:54:54 np0005548915 python3.9[197072]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:54:55 np0005548915 systemd[1]: Reloading.
Dec  6 04:54:55 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:54:55 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:54:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:54:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:54:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:56.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:54:56 np0005548915 python3.9[198388]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:54:56 np0005548915 systemd[1]: Reloading.
Dec  6 04:54:56 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:54:56 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:54:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:56 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:56 np0005548915 podman[198877]: 2025-12-06 09:54:56.700716849 +0000 UTC m=+0.064656501 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 04:54:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:57.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:54:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:57.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:54:57 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:54:57 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:54:57 np0005548915 systemd[1]: man-db-cache-update.service: Consumed 12.365s CPU time.
Dec  6 04:54:57 np0005548915 systemd[1]: run-r1915e722052b45ebaafc73df41e557bb.service: Deactivated successfully.
Dec  6 04:54:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:57 np0005548915 python3.9[199153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:54:57 np0005548915 systemd[1]: Reloading.
Dec  6 04:54:57 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:54:57 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:54:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:54:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:54:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:58.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:54:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:54:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:58 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:58 np0005548915 python3.9[199462]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:54:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:54:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:00 np0005548915 systemd[1]: Reloading.
Dec  6 04:55:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:00.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:00.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:00 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:55:00 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:55:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:00 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 04:55:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 04:55:01 np0005548915 python3.9[199656]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:02.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:02 np0005548915 systemd[1]: Reloading.
Dec  6 04:55:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:02 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:02 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:55:02 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:55:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:03 np0005548915 python3.9[199850]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:55:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:04 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:04 np0005548915 python3.9[200005]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:04 np0005548915 systemd[1]: Reloading.
Dec  6 04:55:04 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:55:04 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:55:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:05 np0005548915 python3.9[200197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 04:55:06 np0005548915 systemd[1]: Reloading.
Dec  6 04:55:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:06.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095506 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:55:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:55:06 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:55:06 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:55:06 np0005548915 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  6 04:55:06 np0005548915 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  6 04:55:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:06 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:07.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:55:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:07.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:55:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:07.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:55:07 np0005548915 python3.9[200391]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:08.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:08 np0005548915 python3.9[200547]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:08 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:55:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:55:09 np0005548915 python3.9[200728]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:10.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:55:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:10 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:10 np0005548915 python3.9[200885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:55:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:55:11 np0005548915 python3.9[201041]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:12.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec  6 04:55:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:12 np0005548915 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:12.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:55:12 np0005548915 python3.9[201197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:12 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:13 np0005548915 python3.9[201353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec  6 04:55:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:14 np0005548915 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:55:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:14 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:55:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:14 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:14 np0005548915 python3.9[201509]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:15 np0005548915 python3.9[201665]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec  6 04:55:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:16 np0005548915 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:55:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:16 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:17.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:55:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:17.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:55:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:55:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:55:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:17 np0005548915 python3.9[201823]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:18.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:55:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:18 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:19 np0005548915 python3.9[201979]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:19 np0005548915 podman[201982]: 2025-12-06 09:55:19.828173625 +0000 UTC m=+0.132433886 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  6 04:55:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  6 04:55:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  6 04:55:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:55:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:20 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:55:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:20 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:20 np0005548915 python3.9[202163]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:55:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 04:55:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:21 np0005548915 python3.9[202319]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:22.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:22.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:55:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:22 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:22 np0005548915 python3.9[202475]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 04:55:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:23 np0005548915 python3.9[202632]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:55:23
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'volumes', 'backups', '.nfs', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'images']
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:55:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:55:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:55:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:24.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:55:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:55:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:55:24 np0005548915 python3.9[202784]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:55:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:24 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:25 np0005548915 python3.9[202936]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:55:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:25 np0005548915 python3.9[203090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:55:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095526 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:55:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:26.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 04:55:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:26 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:26 np0005548915 python3.9[203242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:55:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:27.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:55:27 np0005548915 podman[203367]: 2025-12-06 09:55:27.227735095 +0000 UTC m=+0.074766054 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 04:55:27 np0005548915 python3.9[203411]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:55:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:28.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:55:28 np0005548915 python3.9[203567]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:28 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:28 np0005548915 auditd[701]: Audit daemon rotating log files
Dec  6 04:55:29 np0005548915 python3.9[203717]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014927.643351-1622-248184091554598/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:29 np0005548915 python3.9[203871]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:30.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:55:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:30 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:30 np0005548915 python3.9[203996]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014929.2686815-1622-172598953764159/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:55:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  6 04:55:31 np0005548915 python3.9[204149]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:32 np0005548915 python3.9[204275]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014930.7922695-1622-137857862385184/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:32.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:55:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:32.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:32 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:32 np0005548915 python3.9[204427]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:33 np0005548915 python3.9[204553]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014932.2459354-1622-180131275088592/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:34.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:55:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:34 np0005548915 python3.9[204706]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:34 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:35 np0005548915 python3.9[204831]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014933.7830455-1622-4009720158039/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200038e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:35 np0005548915 python3.9[204985]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:55:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:36.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:36.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:36 np0005548915 python3.9[205110]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014935.3510785-1622-62454991196010/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:37.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:55:37 np0005548915 python3.9[205263]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:55:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:38.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:38.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:38 np0005548915 python3.9[205387]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014936.841242-1622-279820373136681/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:38 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:55:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:55:39 np0005548915 python3.9[205539]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:39 np0005548915 python3.9[205666]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014938.3662996-1622-245092066287171/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:40 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:40 np0005548915 python3.9[205818]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  6 04:55:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:55:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:55:41 np0005548915 python3.9[205972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:42.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:42.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:42 np0005548915 python3.9[206127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:42 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c50001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:43 np0005548915 python3.9[206380]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:43 np0005548915 podman[206402]: 2025-12-06 09:55:43.093397749 +0000 UTC m=+0.077591031 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:55:43 np0005548915 podman[206402]: 2025-12-06 09:55:43.214663741 +0000 UTC m=+0.198857003 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:55:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:43 np0005548915 podman[206675]: 2025-12-06 09:55:43.68283437 +0000 UTC m=+0.050672292 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:55:43 np0005548915 python3.9[206645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:43 np0005548915 podman[206675]: 2025-12-06 09:55:43.690816556 +0000 UTC m=+0.058654478 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:55:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:44 np0005548915 podman[206850]: 2025-12-06 09:55:44.015348098 +0000 UTC m=+0.055108282 container exec c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  6 04:55:44 np0005548915 podman[206850]: 2025-12-06 09:55:44.054933769 +0000 UTC m=+0.094693943 container exec_died c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 04:55:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:55:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:44.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:44.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:44 np0005548915 podman[206981]: 2025-12-06 09:55:44.254078718 +0000 UTC m=+0.052295026 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:55:44 np0005548915 podman[206981]: 2025-12-06 09:55:44.264778908 +0000 UTC m=+0.062995206 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 04:55:44 np0005548915 python3.9[206966]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:44 np0005548915 podman[207052]: 2025-12-06 09:55:44.471867482 +0000 UTC m=+0.056543871 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, distribution-scope=public, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Dec  6 04:55:44 np0005548915 podman[207052]: 2025-12-06 09:55:44.480867616 +0000 UTC m=+0.065543935 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.openshift.tags=Ceph keepalived, vcs-type=git, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Dec  6 04:55:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:44 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:44 np0005548915 podman[207194]: 2025-12-06 09:55:44.686942612 +0000 UTC m=+0.052993535 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:55:44 np0005548915 podman[207194]: 2025-12-06 09:55:44.745784854 +0000 UTC m=+0.111835787 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:55:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:45 np0005548915 podman[207328]: 2025-12-06 09:55:45.009567993 +0000 UTC m=+0.079548504 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:55:45 np0005548915 python3.9[207314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:45 np0005548915 podman[207328]: 2025-12-06 09:55:45.205424603 +0000 UTC m=+0.275405154 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 04:55:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c50001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:45 np0005548915 podman[207570]: 2025-12-06 09:55:45.625926002 +0000 UTC m=+0.056474220 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:55:45 np0005548915 podman[207570]: 2025-12-06 09:55:45.662838581 +0000 UTC m=+0.093386799 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 04:55:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:55:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:55:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:45 np0005548915 python3.9[207602]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a5c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:46.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:46.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:55:46 np0005548915 python3.9[207852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:46 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:55:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:55:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:47.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:55:47 np0005548915 podman[208108]: 2025-12-06 09:55:47.333688296 +0000 UTC m=+0.046549710 container create e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 04:55:47 np0005548915 systemd[1]: Started libpod-conmon-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope.
Dec  6 04:55:47 np0005548915 podman[208108]: 2025-12-06 09:55:47.313587142 +0000 UTC m=+0.026448586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:55:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:55:47 np0005548915 podman[208108]: 2025-12-06 09:55:47.427464034 +0000 UTC m=+0.140325478 container init e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 04:55:47 np0005548915 podman[208108]: 2025-12-06 09:55:47.436766125 +0000 UTC m=+0.149627539 container start e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Dec  6 04:55:47 np0005548915 python3.9[208107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:47 np0005548915 podman[208108]: 2025-12-06 09:55:47.440031004 +0000 UTC m=+0.152892408 container attach e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:55:47 np0005548915 nifty_lalande[208125]: 167 167
Dec  6 04:55:47 np0005548915 systemd[1]: libpod-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope: Deactivated successfully.
Dec  6 04:55:47 np0005548915 conmon[208125]: conmon e785b8b48e74ef7b237b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope/container/memory.events
Dec  6 04:55:47 np0005548915 podman[208108]: 2025-12-06 09:55:47.444565127 +0000 UTC m=+0.157426551 container died e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec  6 04:55:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f28ba16870bf5525c4a33c90562aa23dd9198028b5eb04beb59eeec75b0bbd4d-merged.mount: Deactivated successfully.
Dec  6 04:55:47 np0005548915 podman[208108]: 2025-12-06 09:55:47.487409646 +0000 UTC m=+0.200271060 container remove e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:55:47 np0005548915 systemd[1]: libpod-conmon-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope: Deactivated successfully.
Dec  6 04:55:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:47 np0005548915 podman[208198]: 2025-12-06 09:55:47.648413963 +0000 UTC m=+0.045134122 container create 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:55:47 np0005548915 systemd[1]: Started libpod-conmon-50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236.scope.
Dec  6 04:55:47 np0005548915 podman[208198]: 2025-12-06 09:55:47.629890182 +0000 UTC m=+0.026610361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:55:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:55:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:47 np0005548915 podman[208198]: 2025-12-06 09:55:47.753078245 +0000 UTC m=+0.149798414 container init 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 04:55:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:55:47 np0005548915 podman[208198]: 2025-12-06 09:55:47.760803345 +0000 UTC m=+0.157523504 container start 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 04:55:47 np0005548915 podman[208198]: 2025-12-06 09:55:47.764633088 +0000 UTC m=+0.161353347 container attach 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 04:55:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:48 np0005548915 python3.9[208323]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:48 np0005548915 busy_wilbur[208266]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:55:48 np0005548915 busy_wilbur[208266]: --> All data devices are unavailable
Dec  6 04:55:48 np0005548915 systemd[1]: libpod-50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236.scope: Deactivated successfully.
Dec  6 04:55:48 np0005548915 podman[208198]: 2025-12-06 09:55:48.108031131 +0000 UTC m=+0.504751310 container died 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 04:55:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:55:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:48.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d-merged.mount: Deactivated successfully.
Dec  6 04:55:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:48 np0005548915 podman[208198]: 2025-12-06 09:55:48.154748515 +0000 UTC m=+0.551468674 container remove 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:55:48 np0005548915 systemd[1]: libpod-conmon-50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236.scope: Deactivated successfully.
Dec  6 04:55:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:48 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:48 np0005548915 python3.9[208591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:48 np0005548915 podman[208613]: 2025-12-06 09:55:48.804838777 +0000 UTC m=+0.064442095 container create b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:55:48 np0005548915 systemd[1]: Started libpod-conmon-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope.
Dec  6 04:55:48 np0005548915 podman[208613]: 2025-12-06 09:55:48.770748314 +0000 UTC m=+0.030351712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:55:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:55:48 np0005548915 podman[208613]: 2025-12-06 09:55:48.899205141 +0000 UTC m=+0.158808449 container init b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:55:48 np0005548915 podman[208613]: 2025-12-06 09:55:48.905632984 +0000 UTC m=+0.165236282 container start b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:55:48 np0005548915 podman[208613]: 2025-12-06 09:55:48.909068068 +0000 UTC m=+0.168671356 container attach b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:55:48 np0005548915 frosty_wilbur[208630]: 167 167
Dec  6 04:55:48 np0005548915 systemd[1]: libpod-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope: Deactivated successfully.
Dec  6 04:55:48 np0005548915 conmon[208630]: conmon b5c887abc200c829b4f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope/container/memory.events
Dec  6 04:55:48 np0005548915 podman[208613]: 2025-12-06 09:55:48.912223553 +0000 UTC m=+0.171826841 container died b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:55:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1e7fe66a378f44485885686395de6d436271d6b0bcb4b7e48340f8e371191500-merged.mount: Deactivated successfully.
Dec  6 04:55:48 np0005548915 podman[208613]: 2025-12-06 09:55:48.948410613 +0000 UTC m=+0.208013901 container remove b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:55:48 np0005548915 systemd[1]: libpod-conmon-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope: Deactivated successfully.
Dec  6 04:55:49 np0005548915 podman[208730]: 2025-12-06 09:55:49.117302933 +0000 UTC m=+0.039695515 container create 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:55:49 np0005548915 systemd[1]: Started libpod-conmon-19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab.scope.
Dec  6 04:55:49 np0005548915 podman[208730]: 2025-12-06 09:55:49.099553753 +0000 UTC m=+0.021946355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:55:49 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:55:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:49 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:49 np0005548915 podman[208730]: 2025-12-06 09:55:49.218529572 +0000 UTC m=+0.140922224 container init 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 04:55:49 np0005548915 podman[208730]: 2025-12-06 09:55:49.235922853 +0000 UTC m=+0.158315425 container start 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:55:49 np0005548915 podman[208730]: 2025-12-06 09:55:49.239441199 +0000 UTC m=+0.161833891 container attach 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 04:55:49 np0005548915 python3.9[208826]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]: {
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:    "1": [
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:        {
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "devices": [
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "/dev/loop3"
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            ],
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "lv_name": "ceph_lv0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "lv_size": "21470642176",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "name": "ceph_lv0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "tags": {
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.cluster_name": "ceph",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.crush_device_class": "",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.encrypted": "0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.osd_id": "1",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.type": "block",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.vdo": "0",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:                "ceph.with_tpm": "0"
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            },
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "type": "block",
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:            "vg_name": "ceph_vg0"
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:        }
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]:    ]
Dec  6 04:55:49 np0005548915 kind_driscoll[208770]: }
Dec  6 04:55:49 np0005548915 systemd[1]: libpod-19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab.scope: Deactivated successfully.
Dec  6 04:55:49 np0005548915 podman[208730]: 2025-12-06 09:55:49.584187628 +0000 UTC m=+0.506580230 container died 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 04:55:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64-merged.mount: Deactivated successfully.
Dec  6 04:55:49 np0005548915 podman[208730]: 2025-12-06 09:55:49.638350453 +0000 UTC m=+0.560743075 container remove 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:55:49 np0005548915 systemd[1]: libpod-conmon-19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab.scope: Deactivated successfully.
Dec  6 04:55:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:50 np0005548915 podman[208976]: 2025-12-06 09:55:50.078411142 +0000 UTC m=+0.151851371 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  6 04:55:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:50.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:50 np0005548915 python3.9[209088]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:50 np0005548915 podman[209114]: 2025-12-06 09:55:50.42458784 +0000 UTC m=+0.061339811 container create f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 04:55:50 np0005548915 systemd[1]: Started libpod-conmon-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope.
Dec  6 04:55:50 np0005548915 podman[209114]: 2025-12-06 09:55:50.394183597 +0000 UTC m=+0.030935618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:55:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:55:50 np0005548915 podman[209114]: 2025-12-06 09:55:50.532009637 +0000 UTC m=+0.168761648 container init f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:55:50 np0005548915 podman[209114]: 2025-12-06 09:55:50.543004244 +0000 UTC m=+0.179756215 container start f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:55:50 np0005548915 podman[209114]: 2025-12-06 09:55:50.54653907 +0000 UTC m=+0.183291071 container attach f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 04:55:50 np0005548915 bold_hawking[209133]: 167 167
Dec  6 04:55:50 np0005548915 systemd[1]: libpod-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope: Deactivated successfully.
Dec  6 04:55:50 np0005548915 conmon[209133]: conmon f8c5341bee29cc6f7805 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope/container/memory.events
Dec  6 04:55:50 np0005548915 podman[209114]: 2025-12-06 09:55:50.550113936 +0000 UTC m=+0.186865957 container died f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 04:55:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:50 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:50 np0005548915 systemd[1]: var-lib-containers-storage-overlay-dad6aae2a2412f931b92774cc3ecad3b5f1295c15bf41f7bebeb7f838e5f4e8f-merged.mount: Deactivated successfully.
Dec  6 04:55:50 np0005548915 podman[209114]: 2025-12-06 09:55:50.597109978 +0000 UTC m=+0.233861949 container remove f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:55:50 np0005548915 systemd[1]: libpod-conmon-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope: Deactivated successfully.
Dec  6 04:55:50 np0005548915 podman[209253]: 2025-12-06 09:55:50.799725582 +0000 UTC m=+0.053384267 container create 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 04:55:50 np0005548915 systemd[1]: Started libpod-conmon-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope.
Dec  6 04:55:50 np0005548915 podman[209253]: 2025-12-06 09:55:50.77602073 +0000 UTC m=+0.029679495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:55:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:55:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:55:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:55:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:55:50 np0005548915 podman[209253]: 2025-12-06 09:55:50.922893124 +0000 UTC m=+0.176551829 container init 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:55:50 np0005548915 podman[209253]: 2025-12-06 09:55:50.931010313 +0000 UTC m=+0.184668998 container start 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:55:50 np0005548915 podman[209253]: 2025-12-06 09:55:50.935433254 +0000 UTC m=+0.189091949 container attach 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:55:51 np0005548915 python3.9[209327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:51 np0005548915 lvm[209499]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:55:51 np0005548915 lvm[209499]: VG ceph_vg0 finished
Dec  6 04:55:51 np0005548915 condescending_lumiere[209294]: {}
Dec  6 04:55:51 np0005548915 systemd[1]: libpod-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope: Deactivated successfully.
Dec  6 04:55:51 np0005548915 systemd[1]: libpod-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope: Consumed 1.337s CPU time.
Dec  6 04:55:51 np0005548915 podman[209253]: 2025-12-06 09:55:51.742591536 +0000 UTC m=+0.996250281 container died 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 04:55:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35-merged.mount: Deactivated successfully.
Dec  6 04:55:51 np0005548915 podman[209253]: 2025-12-06 09:55:51.807655797 +0000 UTC m=+1.061314472 container remove 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:55:51 np0005548915 systemd[1]: libpod-conmon-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope: Deactivated successfully.
Dec  6 04:55:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:55:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:55:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:52 np0005548915 python3.9[209567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:52.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:52 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:52 np0005548915 python3.9[209715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014951.448082-2285-90882704705739/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:52 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:52 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:55:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:53 np0005548915 python3.9[209869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:55:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:55:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:55:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:55:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:55:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:55:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:55:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:55:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:55:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:54.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:54.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:55:54.226 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:55:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:55:54.226 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:55:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:55:54.227 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:55:54 np0005548915 python3.9[209992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014953.0735557-2285-248124535653868/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:54 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:55:55 np0005548915 python3.9[210145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:55 np0005548915 python3.9[210269]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014954.6756606-2285-262223880239663/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:55:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:55:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:56.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:55:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:55:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:56.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:55:56 np0005548915 python3.9[210421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:56 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:57.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:55:57 np0005548915 python3.9[210544]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014955.9772198-2285-215934208566610/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:57 np0005548915 podman[210584]: 2025-12-06 09:55:57.44684255 +0000 UTC m=+0.067532559 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  6 04:55:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:57 np0005548915 python3.9[210717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:55:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:58.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:55:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:55:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:55:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:58 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:58 np0005548915 python3.9[210840]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014957.375574-2285-211683571312907/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:59 np0005548915 python3.9[210993]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:55:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:55:59 np0005548915 python3.9[211117]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014958.7491493-2285-202535243719202/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:55:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:56:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:00.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:56:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:00.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:56:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:56:00 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy ignored for local
Dec  6 04:56:00 np0005548915 kernel: ganesha.nfsd[207909]: segfault at 50 ip 00007f1d03ebf32e sp 00007f1ccd7f9210 error 4 in libntirpc.so.5.8[7f1d03ea4000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  6 04:56:00 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 04:56:00 np0005548915 systemd[1]: Started Process Core Dump (PID 211270/UID 0).
Dec  6 04:56:00 np0005548915 python3.9[211269]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:56:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:56:01 np0005548915 python3.9[211395]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014960.1505096-2285-96024825075782/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:01 np0005548915 python3.9[211548]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:02 np0005548915 systemd-coredump[211271]: Process 170881 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 65:#012#0  0x00007f1d03ebf32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 04:56:02 np0005548915 systemd[1]: systemd-coredump@5-211270-0.service: Deactivated successfully.
Dec  6 04:56:02 np0005548915 systemd[1]: systemd-coredump@5-211270-0.service: Consumed 1.433s CPU time.
Dec  6 04:56:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:56:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:02.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:02 np0005548915 podman[211618]: 2025-12-06 09:56:02.203309055 +0000 UTC m=+0.044013081 container died c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:56:02 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93-merged.mount: Deactivated successfully.
Dec  6 04:56:02 np0005548915 podman[211618]: 2025-12-06 09:56:02.248690393 +0000 UTC m=+0.089394349 container remove c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:56:02 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 04:56:02 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 04:56:02 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.212s CPU time.
Dec  6 04:56:02 np0005548915 python3.9[211717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014961.4693465-2285-23129420167657/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:03 np0005548915 python3.9[211871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:03 np0005548915 python3.9[211995]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014962.7917209-2285-119965784766259/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:56:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:04.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:56:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:04.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:56:04 np0005548915 python3.9[212147]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:05 np0005548915 python3.9[212270]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014964.1399703-2285-3098508630762/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:05 np0005548915 python3.9[212424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:56:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:06.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:06.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:06 np0005548915 python3.9[212547]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014965.346088-2285-244284179857864/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095606 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:56:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:07.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:56:07 np0005548915 python3.9[212700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:07 np0005548915 python3.9[212824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014966.7370355-2285-272146555173941/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:56:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:08.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:56:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:56:08 np0005548915 python3.9[212976]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:56:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:56:09 np0005548915 python3.9[213125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014968.1834977-2285-42969441720613/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:09 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  6 04:56:09 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:09.997821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:56:09 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  6 04:56:09 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014969997903, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3929, "num_deletes": 501, "total_data_size": 7898903, "memory_usage": 8015064, "flush_reason": "Manual Compaction"}
Dec  6 04:56:09 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970053250, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4439644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13301, "largest_seqno": 17229, "table_properties": {"data_size": 4428209, "index_size": 6457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 30991, "raw_average_key_size": 19, "raw_value_size": 4401079, "raw_average_value_size": 2824, "num_data_blocks": 282, "num_entries": 1558, "num_filter_entries": 1558, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014557, "oldest_key_time": 1765014557, "file_creation_time": 1765014969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 55514 microseconds, and 18570 cpu microseconds.
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.053338) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4439644 bytes OK
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.053375) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.055306) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.055328) EVENT_LOG_v1 {"time_micros": 1765014970055321, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.055356) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7883055, prev total WAL file size 7883055, number of live WAL files 2.
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.058801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4335KB)], [32(13MB)]
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970058920, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 18464469, "oldest_snapshot_seqno": -1}
Dec  6 04:56:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:56:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:10.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:10.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5028 keys, 13936017 bytes, temperature: kUnknown
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970220570, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 13936017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13900388, "index_size": 21951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125831, "raw_average_key_size": 25, "raw_value_size": 13807284, "raw_average_value_size": 2746, "num_data_blocks": 917, "num_entries": 5028, "num_filter_entries": 5028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014970, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.220887) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 13936017 bytes
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.222938) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.1 rd, 86.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.2, 13.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.3) write-amplify(3.1) OK, records in: 5849, records dropped: 821 output_compression: NoCompression
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.222956) EVENT_LOG_v1 {"time_micros": 1765014970222947, "job": 14, "event": "compaction_finished", "compaction_time_micros": 161757, "compaction_time_cpu_micros": 44409, "output_level": 6, "num_output_files": 1, "total_output_size": 13936017, "num_input_records": 5849, "num_output_records": 5028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970223704, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970226063, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.058610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:10 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:10 np0005548915 python3.9[213278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 04:56:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 04:56:10 np0005548915 python3.9[213401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014969.7462847-2285-116831586194896/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:11 np0005548915 python3.9[213553]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:56:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:56:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:12 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 6.
Dec  6 04:56:12 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:56:12 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.212s CPU time.
Dec  6 04:56:12 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 04:56:12 np0005548915 podman[213758]: 2025-12-06 09:56:12.697014947 +0000 UTC m=+0.050555596 container create 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:56:12 np0005548915 python3.9[213723]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  6 04:56:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:12 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:12 np0005548915 podman[213758]: 2025-12-06 09:56:12.675263539 +0000 UTC m=+0.028804238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:56:12 np0005548915 podman[213758]: 2025-12-06 09:56:12.787810537 +0000 UTC m=+0.141351216 container init 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 04:56:12 np0005548915 podman[213758]: 2025-12-06 09:56:12.79312361 +0000 UTC m=+0.146664269 container start 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:56:12 np0005548915 bash[213758]: 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765
Dec  6 04:56:12 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 04:56:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:56:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 04:56:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:56:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:56:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:14.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:14 np0005548915 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  6 04:56:14 np0005548915 python3.9[213974]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:15 np0005548915 python3.9[214127]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:56:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:16.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:16 np0005548915 python3.9[214280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:17 np0005548915 python3.9[214432]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:17.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:56:17 np0005548915 python3.9[214586]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 426 B/s wr, 1 op/s
Dec  6 04:56:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  6 04:56:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:18.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  6 04:56:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:18 np0005548915 python3.9[214738]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:56:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:56:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 04:56:19 np0005548915 python3.9[214891]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  6 04:56:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:20.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:20.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:20 np0005548915 python3.9[215044]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:20 np0005548915 podman[215045]: 2025-12-06 09:56:20.478654642 +0000 UTC m=+0.152569308 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  6 04:56:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 04:56:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 04:56:21 np0005548915 python3.9[215222]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:21 np0005548915 python3.9[215376]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  6 04:56:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095622 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:56:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 04:56:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 04:56:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:22.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:22 np0005548915 python3.9[215528]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:56:22 np0005548915 systemd[1]: Reloading.
Dec  6 04:56:22 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:56:22 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:56:22 np0005548915 systemd[1]: Starting libvirt logging daemon socket...
Dec  6 04:56:22 np0005548915 systemd[1]: Listening on libvirt logging daemon socket.
Dec  6 04:56:22 np0005548915 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  6 04:56:22 np0005548915 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  6 04:56:22 np0005548915 systemd[1]: Starting libvirt logging daemon...
Dec  6 04:56:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:56:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:56:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:56:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 04:56:23 np0005548915 systemd[1]: Started libvirt logging daemon.
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:56:23
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta']
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:56:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:56:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:56:23 np0005548915 python3.9[215723]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:56:23 np0005548915 systemd[1]: Reloading.
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:56:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:56:24 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:56:24 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:56:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:24.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:24 np0005548915 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  6 04:56:24 np0005548915 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  6 04:56:24 np0005548915 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  6 04:56:24 np0005548915 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  6 04:56:24 np0005548915 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  6 04:56:24 np0005548915 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  6 04:56:24 np0005548915 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  6 04:56:24 np0005548915 systemd[1]: Starting libvirt nodedev daemon...
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:56:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:56:24 np0005548915 systemd[1]: Started libvirt nodedev daemon.
Dec  6 04:56:24 np0005548915 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  6 04:56:24 np0005548915 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  6 04:56:24 np0005548915 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  6 04:56:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:25 np0005548915 python3.9[215948]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:56:25 np0005548915 systemd[1]: Reloading.
Dec  6 04:56:25 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:56:25 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:56:25 np0005548915 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  6 04:56:25 np0005548915 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  6 04:56:25 np0005548915 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  6 04:56:25 np0005548915 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  6 04:56:25 np0005548915 systemd[1]: Starting libvirt proxy daemon...
Dec  6 04:56:25 np0005548915 systemd[1]: Started libvirt proxy daemon.
Dec  6 04:56:25 np0005548915 setroubleshoot[215760]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 74c981a9-28dd-4b72-bb89-6fef8458e1c1
Dec  6 04:56:25 np0005548915 setroubleshoot[215760]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec  6 04:56:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 04:56:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:26.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:26.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:26 np0005548915 python3.9[216163]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:56:26 np0005548915 systemd[1]: Reloading.
Dec  6 04:56:26 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:56:26 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:56:26 np0005548915 systemd[1]: Listening on libvirt locking daemon socket.
Dec  6 04:56:26 np0005548915 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  6 04:56:26 np0005548915 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  6 04:56:26 np0005548915 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  6 04:56:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:56:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:56:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:56:26 np0005548915 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  6 04:56:27 np0005548915 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  6 04:56:27 np0005548915 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  6 04:56:27 np0005548915 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  6 04:56:27 np0005548915 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  6 04:56:27 np0005548915 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  6 04:56:27 np0005548915 systemd[1]: Starting libvirt QEMU daemon...
Dec  6 04:56:27 np0005548915 systemd[1]: Started libvirt QEMU daemon.
Dec  6 04:56:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:27.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:56:27 np0005548915 podman[216351]: 2025-12-06 09:56:27.628751186 +0000 UTC m=+0.078667924 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 04:56:27 np0005548915 python3.9[216390]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:56:27 np0005548915 systemd[1]: Reloading.
Dec  6 04:56:28 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:56:28 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:56:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 767 B/s wr, 3 op/s
Dec  6 04:56:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:28.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:28.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:28 np0005548915 systemd[1]: Starting libvirt secret daemon socket...
Dec  6 04:56:28 np0005548915 systemd[1]: Listening on libvirt secret daemon socket.
Dec  6 04:56:28 np0005548915 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  6 04:56:28 np0005548915 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  6 04:56:28 np0005548915 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  6 04:56:28 np0005548915 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  6 04:56:28 np0005548915 systemd[1]: Starting libvirt secret daemon...
Dec  6 04:56:28 np0005548915 systemd[1]: Started libvirt secret daemon.
Dec  6 04:56:29 np0005548915 python3.9[216632]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:30 np0005548915 python3.9[216785]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 04:56:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Dec  6 04:56:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:30.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:30.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:56:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:56:31 np0005548915 python3.9[216937]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:56:31 np0005548915 python3.9[217093]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 04:56:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Dec  6 04:56:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:32.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:32.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:32 np0005548915 python3.9[217243]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:56:33 np0005548915 python3.9[217378]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014992.3738215-3359-269067924263915/.source.xml follow=False _original_basename=secret.xml.j2 checksum=f7c948a7651e1e704e9fb6c67bea136c2b7876ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Dec  6 04:56:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:34 np0005548915 python3.9[217534]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 5ecd3f74-dade-5fc4-92ce-8950ae424258#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:56:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:35 np0005548915 python3.9[217696]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:35 np0005548915 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  6 04:56:35 np0005548915 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.041s CPU time.
Dec  6 04:56:35 np0005548915 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  6 04:56:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:56:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:56:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Dec  6 04:56:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:36.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:36.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095636 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:56:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:37.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:56:37 np0005548915 python3.9[218162]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  6 04:56:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:38.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:38.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:38 np0005548915 python3.9[218315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:38 np0005548915 python3.9[218438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014997.829946-3524-90202119480329/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:56:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:56:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:56:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724001b20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:39 np0005548915 python3.9[218592]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:56:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:40.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:40 np0005548915 python3.9[218744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:56:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:56:41 np0005548915 python3.9[218822]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:41 np0005548915 python3.9[218976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:56:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095642 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:56:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:42 np0005548915 python3.9[219054]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.eah_39xm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724001b20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:43 np0005548915 python3.9[219207]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.369899) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003369985, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 509, "num_deletes": 251, "total_data_size": 595705, "memory_usage": 604872, "flush_reason": "Manual Compaction"}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003375433, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 589687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17231, "largest_seqno": 17738, "table_properties": {"data_size": 586862, "index_size": 861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6542, "raw_average_key_size": 18, "raw_value_size": 581332, "raw_average_value_size": 1665, "num_data_blocks": 39, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014970, "oldest_key_time": 1765014970, "file_creation_time": 1765015003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 5611 microseconds, and 2650 cpu microseconds.
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.375522) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 589687 bytes OK
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.375546) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379065) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379080) EVENT_LOG_v1 {"time_micros": 1765015003379074, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 592839, prev total WAL file size 592839, number of live WAL files 2.
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(575KB)], [35(13MB)]
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003379687, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14525704, "oldest_snapshot_seqno": -1}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4867 keys, 12333955 bytes, temperature: kUnknown
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003510469, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12333955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12300683, "index_size": 19978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123119, "raw_average_key_size": 25, "raw_value_size": 12211574, "raw_average_value_size": 2509, "num_data_blocks": 830, "num_entries": 4867, "num_filter_entries": 4867, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.510780) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12333955 bytes
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.512516) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.0 rd, 94.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 13.3 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(45.5) write-amplify(20.9) OK, records in: 5377, records dropped: 510 output_compression: NoCompression
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.512541) EVENT_LOG_v1 {"time_micros": 1765015003512528, "job": 16, "event": "compaction_finished", "compaction_time_micros": 130865, "compaction_time_cpu_micros": 47161, "output_level": 6, "num_output_files": 1, "total_output_size": 12333955, "num_input_records": 5377, "num_output_records": 4867, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003512761, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003515404, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:56:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:43 np0005548915 python3.9[219286]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:56:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:56:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:44.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:56:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:44 np0005548915 python3.9[219438]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:56:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:45 np0005548915 python3[219592]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 04:56:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  6 04:56:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:46.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:46.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:46 np0005548915 python3.9[219745]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:46 np0005548915 python3.9[219823]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:47.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:56:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:47.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:56:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:47.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:56:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  6 04:56:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:48.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:48 np0005548915 python3.9[219977]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:49 np0005548915 python3.9[220058]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:49 np0005548915 python3.9[220234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002c90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:56:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:56:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:50.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:56:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:50 np0005548915 python3.9[220312]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:56:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:56:51 np0005548915 podman[220436]: 2025-12-06 09:56:51.028302077 +0000 UTC m=+0.092422685 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 04:56:51 np0005548915 python3.9[220483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:51 np0005548915 python3.9[220569]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:56:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:56:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:52.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:56:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:52.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:52 np0005548915 python3.9[220771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:56:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:56:53 np0005548915 python3.9[220947]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765015011.925411-3899-281101420578131/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:56:53 np0005548915 podman[221067]: 2025-12-06 09:56:53.574816344 +0000 UTC m=+0.044864362 container create 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:56:53 np0005548915 systemd[1]: Started libpod-conmon-1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6.scope.
Dec  6 04:56:53 np0005548915 podman[221067]: 2025-12-06 09:56:53.555108472 +0000 UTC m=+0.025156510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:56:53 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:56:53 np0005548915 podman[221067]: 2025-12-06 09:56:53.672177041 +0000 UTC m=+0.142225089 container init 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  6 04:56:53 np0005548915 podman[221067]: 2025-12-06 09:56:53.685215974 +0000 UTC m=+0.155263982 container start 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 04:56:53 np0005548915 podman[221067]: 2025-12-06 09:56:53.688136512 +0000 UTC m=+0.158184530 container attach 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:56:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:53 np0005548915 vigorous_bartik[221126]: 167 167
Dec  6 04:56:53 np0005548915 systemd[1]: libpod-1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6.scope: Deactivated successfully.
Dec  6 04:56:53 np0005548915 podman[221067]: 2025-12-06 09:56:53.693732723 +0000 UTC m=+0.163780761 container died 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:56:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ec0d31c56aa7793ceda9f02cdcc236267312e0a81eb8fae695a4f5d400360a5e-merged.mount: Deactivated successfully.
Dec  6 04:56:53 np0005548915 podman[221067]: 2025-12-06 09:56:53.739431106 +0000 UTC m=+0.209479124 container remove 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 04:56:53 np0005548915 systemd[1]: libpod-conmon-1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6.scope: Deactivated successfully.
Dec  6 04:56:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:56:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:56:53 np0005548915 podman[221212]: 2025-12-06 09:56:53.924967693 +0000 UTC m=+0.060268457 container create a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:56:53 np0005548915 systemd[1]: Started libpod-conmon-a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8.scope.
Dec  6 04:56:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:56:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:56:53 np0005548915 podman[221212]: 2025-12-06 09:56:53.896201517 +0000 UTC m=+0.031502371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:56:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:56:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:56:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:56:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:56:54 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:56:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:54 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:54 np0005548915 podman[221212]: 2025-12-06 09:56:54.022148245 +0000 UTC m=+0.157449059 container init a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 04:56:54 np0005548915 python3.9[221206]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:54 np0005548915 podman[221212]: 2025-12-06 09:56:54.030436479 +0000 UTC m=+0.165737243 container start a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 04:56:54 np0005548915 podman[221212]: 2025-12-06 09:56:54.034875088 +0000 UTC m=+0.170175852 container attach a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  6 04:56:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  6 04:56:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:54.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:56:54.227 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:56:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:56:54.229 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:56:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:56:54.229 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:56:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:56:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:56:54 np0005548915 interesting_khayyam[221228]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:56:54 np0005548915 interesting_khayyam[221228]: --> All data devices are unavailable
Dec  6 04:56:54 np0005548915 systemd[1]: libpod-a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8.scope: Deactivated successfully.
Dec  6 04:56:54 np0005548915 podman[221212]: 2025-12-06 09:56:54.399838487 +0000 UTC m=+0.535139251 container died a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:56:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b-merged.mount: Deactivated successfully.
Dec  6 04:56:54 np0005548915 podman[221212]: 2025-12-06 09:56:54.448368087 +0000 UTC m=+0.583668841 container remove a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  6 04:56:54 np0005548915 systemd[1]: libpod-conmon-a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8.scope: Deactivated successfully.
Dec  6 04:56:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:54 np0005548915 python3.9[221410]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:56:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:56:55 np0005548915 podman[221576]: 2025-12-06 09:56:55.071722827 +0000 UTC m=+0.054462830 container create 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:56:55 np0005548915 systemd[1]: Started libpod-conmon-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope.
Dec  6 04:56:55 np0005548915 podman[221576]: 2025-12-06 09:56:55.044232336 +0000 UTC m=+0.026972439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:56:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:56:55 np0005548915 podman[221576]: 2025-12-06 09:56:55.167040619 +0000 UTC m=+0.149780642 container init 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 04:56:55 np0005548915 podman[221576]: 2025-12-06 09:56:55.174910662 +0000 UTC m=+0.157650665 container start 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:56:55 np0005548915 podman[221576]: 2025-12-06 09:56:55.178229121 +0000 UTC m=+0.160969144 container attach 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 04:56:55 np0005548915 jovial_rubin[221593]: 167 167
Dec  6 04:56:55 np0005548915 systemd[1]: libpod-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope: Deactivated successfully.
Dec  6 04:56:55 np0005548915 conmon[221593]: conmon 62f8063e0b7f94a41f95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope/container/memory.events
Dec  6 04:56:55 np0005548915 podman[221576]: 2025-12-06 09:56:55.182069955 +0000 UTC m=+0.164809988 container died 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:56:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3abcd1be01ed1c672d71d63466d8b9f16afe5f879308bb32461651d27a51bd96-merged.mount: Deactivated successfully.
Dec  6 04:56:55 np0005548915 podman[221576]: 2025-12-06 09:56:55.221011816 +0000 UTC m=+0.203751819 container remove 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:56:55 np0005548915 systemd[1]: libpod-conmon-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope: Deactivated successfully.
Dec  6 04:56:55 np0005548915 podman[221694]: 2025-12-06 09:56:55.441540537 +0000 UTC m=+0.055233821 container create 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec  6 04:56:55 np0005548915 systemd[1]: Started libpod-conmon-32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8.scope.
Dec  6 04:56:55 np0005548915 podman[221694]: 2025-12-06 09:56:55.414886168 +0000 UTC m=+0.028579432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:56:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:56:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:55 np0005548915 python3.9[221691]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:55 np0005548915 podman[221694]: 2025-12-06 09:56:55.541145745 +0000 UTC m=+0.154839069 container init 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:56:55 np0005548915 podman[221694]: 2025-12-06 09:56:55.551240748 +0000 UTC m=+0.164934032 container start 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:56:55 np0005548915 podman[221694]: 2025-12-06 09:56:55.555857222 +0000 UTC m=+0.169550566 container attach 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 04:56:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:55 np0005548915 clever_borg[221710]: {
Dec  6 04:56:55 np0005548915 clever_borg[221710]:    "1": [
Dec  6 04:56:55 np0005548915 clever_borg[221710]:        {
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "devices": [
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "/dev/loop3"
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            ],
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "lv_name": "ceph_lv0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "lv_size": "21470642176",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "name": "ceph_lv0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "tags": {
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.cluster_name": "ceph",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.crush_device_class": "",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.encrypted": "0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.osd_id": "1",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.type": "block",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.vdo": "0",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:                "ceph.with_tpm": "0"
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            },
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "type": "block",
Dec  6 04:56:55 np0005548915 clever_borg[221710]:            "vg_name": "ceph_vg0"
Dec  6 04:56:55 np0005548915 clever_borg[221710]:        }
Dec  6 04:56:55 np0005548915 clever_borg[221710]:    ]
Dec  6 04:56:55 np0005548915 clever_borg[221710]: }
Dec  6 04:56:55 np0005548915 systemd[1]: libpod-32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8.scope: Deactivated successfully.
Dec  6 04:56:55 np0005548915 podman[221694]: 2025-12-06 09:56:55.889732891 +0000 UTC m=+0.503426145 container died 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:56:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925-merged.mount: Deactivated successfully.
Dec  6 04:56:55 np0005548915 podman[221694]: 2025-12-06 09:56:55.938944009 +0000 UTC m=+0.552637273 container remove 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:56:55 np0005548915 systemd[1]: libpod-conmon-32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8.scope: Deactivated successfully.
Dec  6 04:56:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:56:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:56 np0005548915 python3.9[221933]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:56:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:56 np0005548915 podman[222001]: 2025-12-06 09:56:56.633904643 +0000 UTC m=+0.053339661 container create 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 04:56:56 np0005548915 systemd[1]: Started libpod-conmon-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope.
Dec  6 04:56:56 np0005548915 podman[222001]: 2025-12-06 09:56:56.612459734 +0000 UTC m=+0.031894792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:56:56 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:56:56 np0005548915 podman[222001]: 2025-12-06 09:56:56.728892506 +0000 UTC m=+0.148327534 container init 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:56:56 np0005548915 podman[222001]: 2025-12-06 09:56:56.740602182 +0000 UTC m=+0.160037190 container start 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:56:56 np0005548915 podman[222001]: 2025-12-06 09:56:56.744721623 +0000 UTC m=+0.164156661 container attach 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:56:56 np0005548915 awesome_snyder[222022]: 167 167
Dec  6 04:56:56 np0005548915 systemd[1]: libpod-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope: Deactivated successfully.
Dec  6 04:56:56 np0005548915 conmon[222022]: conmon 01845209bf092faadda9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope/container/memory.events
Dec  6 04:56:56 np0005548915 podman[222001]: 2025-12-06 09:56:56.748210437 +0000 UTC m=+0.167645445 container died 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec  6 04:56:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-5b8cba4244b40e648a46ba1124b3317889e24b213f87a33dd6e826666e582140-merged.mount: Deactivated successfully.
Dec  6 04:56:56 np0005548915 podman[222001]: 2025-12-06 09:56:56.793013216 +0000 UTC m=+0.212448244 container remove 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:56:56 np0005548915 systemd[1]: libpod-conmon-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope: Deactivated successfully.
Dec  6 04:56:57 np0005548915 podman[222123]: 2025-12-06 09:56:57.025565202 +0000 UTC m=+0.074675627 container create da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 04:56:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:57.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:56:57 np0005548915 systemd[1]: Started libpod-conmon-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope.
Dec  6 04:56:57 np0005548915 podman[222123]: 2025-12-06 09:56:56.99250865 +0000 UTC m=+0.041619165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:56:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:56:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:56:57 np0005548915 podman[222123]: 2025-12-06 09:56:57.139534938 +0000 UTC m=+0.188645403 container init da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  6 04:56:57 np0005548915 podman[222123]: 2025-12-06 09:56:57.15630664 +0000 UTC m=+0.205417045 container start da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:56:57 np0005548915 podman[222123]: 2025-12-06 09:56:57.160328808 +0000 UTC m=+0.209439233 container attach da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:56:57 np0005548915 python3.9[222189]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:56:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:57 np0005548915 podman[222378]: 2025-12-06 09:56:57.830055091 +0000 UTC m=+0.059544398 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  6 04:56:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:57 np0005548915 lvm[222439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:56:57 np0005548915 lvm[222439]: VG ceph_vg0 finished
Dec  6 04:56:57 np0005548915 upbeat_noether[222184]: {}
Dec  6 04:56:58 np0005548915 systemd[1]: libpod-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope: Deactivated successfully.
Dec  6 04:56:58 np0005548915 systemd[1]: libpod-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope: Consumed 1.337s CPU time.
Dec  6 04:56:58 np0005548915 podman[222123]: 2025-12-06 09:56:58.014377315 +0000 UTC m=+1.063487720 container died da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:56:58 np0005548915 python3.9[222432]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:56:58 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab-merged.mount: Deactivated successfully.
Dec  6 04:56:58 np0005548915 podman[222123]: 2025-12-06 09:56:58.097518199 +0000 UTC m=+1.146628614 container remove da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 04:56:58 np0005548915 systemd[1]: libpod-conmon-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope: Deactivated successfully.
Dec  6 04:56:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:56:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:58 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:56:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:56:58 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:58.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:56:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:56:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:56:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:58 np0005548915 python3.9[222632]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:56:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:59 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:56:59 np0005548915 python3.9[222785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:56:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:56:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:00 np0005548915 python3.9[222909]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015019.05327-4115-272003002836899/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:00 np0005548915 python3.9[223061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:57:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:57:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:57:01 np0005548915 python3.9[223185]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015020.3820698-4160-167767220194968/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:02 np0005548915 python3.9[223338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:57:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:02.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:02.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:02 np0005548915 python3.9[223461]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015021.6593738-4205-222152015128410/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:03 np0005548915 python3.9[223614]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:57:03 np0005548915 systemd[1]: Reloading.
Dec  6 04:57:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:03 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:57:03 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:57:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:04 np0005548915 systemd[1]: Reached target edpm_libvirt.target.
Dec  6 04:57:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:57:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:04.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:04 np0005548915 python3.9[223805]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  6 04:57:04 np0005548915 systemd[1]: Reloading.
Dec  6 04:57:04 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:57:04 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:57:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:05 np0005548915 systemd[1]: Reloading.
Dec  6 04:57:05 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:57:05 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:57:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:06 np0005548915 systemd[1]: session-53.scope: Deactivated successfully.
Dec  6 04:57:06 np0005548915 systemd[1]: session-53.scope: Consumed 3min 48.573s CPU time.
Dec  6 04:57:06 np0005548915 systemd-logind[795]: Session 53 logged out. Waiting for processes to exit.
Dec  6 04:57:06 np0005548915 systemd-logind[795]: Removed session 53.
Dec  6 04:57:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:06.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:07.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:57:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500013a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:08.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:08.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:57:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:57:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:10.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:10.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:57:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:57:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:11 np0005548915 systemd-logind[795]: New session 54 of user zuul.
Dec  6 04:57:11 np0005548915 systemd[1]: Started Session 54 of User zuul.
Dec  6 04:57:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:12.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:12.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:12 np0005548915 python3.9[224089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:57:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:57:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:14.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:14 np0005548915 python3.9[224245]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:57:14 np0005548915 network[224262]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:57:14 np0005548915 network[224263]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:57:14 np0005548915 network[224264]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:57:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:17.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:57:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:18.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:18 np0005548915 python3.9[224540]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 04:57:19 np0005548915 python3.9[224625]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:57:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47540095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:20.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:57:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:57:21 np0005548915 podman[224629]: 2025-12-06 09:57:21.500431249 +0000 UTC m=+0.125731544 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 04:57:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:22.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:22.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47540095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:57:23
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images', 'vms', '.nfs', 'default.rgw.meta', '.rgw.root']
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:57:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:57:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:57:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:57:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:57:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:24.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:57:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:57:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:26.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:26 np0005548915 python3.9[224810]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:57:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:57:27 np0005548915 python3.9[224963]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:57:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:28 np0005548915 podman[225089]: 2025-12-06 09:57:28.129436321 +0000 UTC m=+0.055473238 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 04:57:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:28.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:28 np0005548915 python3.9[225133]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:57:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:29 np0005548915 python3.9[225288]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:57:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:29 np0005548915 python3.9[225468]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:57:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:57:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:57:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:30.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:30 np0005548915 python3.9[225591]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015049.2918446-245-182972996494157/.source.iscsi _original_basename=.hf4jdjk9 follow=False checksum=99526e0d7ff5604cf6666b9c8f5aa83fcb820e36 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:57:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:57:31 np0005548915 python3.9[225744]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:32.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:32 np0005548915 python3.9[225897]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:32.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:33 np0005548915 python3.9[226050]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:57:33 np0005548915 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  6 04:57:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:57:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:34.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:34.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:34 np0005548915 python3.9[226207]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:57:34 np0005548915 systemd[1]: Reloading.
Dec  6 04:57:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:34 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:57:34 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:57:34 np0005548915 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  6 04:57:34 np0005548915 systemd[1]: Starting Open-iSCSI...
Dec  6 04:57:34 np0005548915 kernel: Loading iSCSI transport class v2.0-870.
Dec  6 04:57:34 np0005548915 systemd[1]: Started Open-iSCSI.
Dec  6 04:57:34 np0005548915 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  6 04:57:35 np0005548915 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  6 04:57:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:35 np0005548915 python3.9[226409]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:57:36 np0005548915 network[226426]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:57:36 np0005548915 network[226427]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:57:36 np0005548915 network[226428]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:57:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:36.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:37.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:57:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:38.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:38.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:57:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:57:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095740 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:57:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:57:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:57:40 np0005548915 python3.9[226705]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 04:57:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:40.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:57:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:57:41 np0005548915 python3.9[226858]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  6 04:57:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:42 np0005548915 python3.9[227016]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:57:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:57:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:42.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:42 np0005548915 python3.9[227139]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015061.6399589-476-240160424103381/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:43 np0005548915 python3.9[227292]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 04:57:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3994 writes, 18K keys, 3993 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 3994 writes, 3993 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1469 writes, 6211 keys, 1469 commit groups, 1.0 writes per commit group, ingest: 10.99 MB, 0.02 MB/s#012Interval WAL: 1469 writes, 1469 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.2      0.34              0.07         8    0.043       0      0       0.0       0.0#012  L6      1/0   11.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     92.1     78.9      1.15              0.26         7    0.164     32K   3649       0.0       0.0#012 Sum      1/0   11.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     70.8     78.7      1.49              0.33        15    0.100     32K   3649       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6     82.3     79.2      0.76              0.18         8    0.094     20K   2298       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     92.1     78.9      1.15              0.26         7    0.164     32K   3649       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     79.6      0.34              0.07         7    0.048       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.026, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.11 GB write, 0.10 MB/s write, 0.10 GB read, 0.09 MB/s read, 1.5 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 4.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000137 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(267,4.47 MB,1.47055%) FilterBlock(16,100.92 KB,0.0324199%) IndexBlock(16,194.95 KB,0.0626263%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 04:57:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:57:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:57:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:57:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:44.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:44 np0005548915 python3.9[227445]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:57:44 np0005548915 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  6 04:57:44 np0005548915 systemd[1]: Stopped Load Kernel Modules.
Dec  6 04:57:44 np0005548915 systemd[1]: Stopping Load Kernel Modules...
Dec  6 04:57:44 np0005548915 systemd[1]: Starting Load Kernel Modules...
Dec  6 04:57:44 np0005548915 systemd[1]: Finished Load Kernel Modules.
Dec  6 04:57:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:45 np0005548915 python3.9[227603]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:57:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:57:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:46.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:46 np0005548915 python3.9[227755]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:57:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:57:47 np0005548915 python3.9[227908]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:57:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:48 np0005548915 python3.9[228061]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:57:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:57:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:48.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:57:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:57:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:57:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:48 np0005548915 python3.9[228184]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015067.6142457-650-215611357130144/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:49 np0005548915 python3.9[228363]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:57:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:57:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:50.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:50.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:50 np0005548915 python3.9[228516]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:57:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  6 04:57:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 04:57:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 04:57:51 np0005548915 python3.9[228669]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:52 np0005548915 podman[228794]: 2025-12-06 09:57:52.009723936 +0000 UTC m=+0.133067361 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 04:57:52 np0005548915 python3.9[228846]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:57:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:57:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:52.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:57:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:52 np0005548915 python3.9[229004]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:53 np0005548915 python3.9[229158]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:57:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:57:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:57:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:57:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:57:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:57:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:57:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:57:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:57:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:57:54.229 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:57:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:57:54.230 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:57:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:57:54.230 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:57:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:54.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:57:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:57:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 04:57:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:54 np0005548915 python3.9[229310]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:57:55 np0005548915 python3.9[229463]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:56 np0005548915 python3.9[229616]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:57:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:57:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:57:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:56.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:57:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:56.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:56 np0005548915 python3.9[229770]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:57:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:57.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:57:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:57:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:57 np0005548915 python3.9[229924]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:57:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 04:57:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:57:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:58.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:57:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:57:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:57:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:57:58 np0005548915 podman[229949]: 2025-12-06 09:57:58.484407104 +0000 UTC m=+0.105034385 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 04:57:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:58 np0005548915 python3.9[230156]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:57:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:57:59 np0005548915 python3.9[230254]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:57:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:57:59 np0005548915 podman[230471]: 2025-12-06 09:57:59.966717534 +0000 UTC m=+0.050069572 container create e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:58:00 np0005548915 systemd[1]: Started libpod-conmon-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope.
Dec  6 04:58:00 np0005548915 podman[230471]: 2025-12-06 09:57:59.943323462 +0000 UTC m=+0.026675550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:58:00 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:00 np0005548915 podman[230471]: 2025-12-06 09:58:00.064627546 +0000 UTC m=+0.147979624 container init e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 04:58:00 np0005548915 podman[230471]: 2025-12-06 09:58:00.081640965 +0000 UTC m=+0.164993003 container start e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:58:00 np0005548915 podman[230471]: 2025-12-06 09:58:00.086179187 +0000 UTC m=+0.169531275 container attach e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec  6 04:58:00 np0005548915 systemd[1]: libpod-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope: Deactivated successfully.
Dec  6 04:58:00 np0005548915 exciting_villani[230516]: 167 167
Dec  6 04:58:00 np0005548915 conmon[230516]: conmon e09fe20b244d7f95e13d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope/container/memory.events
Dec  6 04:58:00 np0005548915 podman[230471]: 2025-12-06 09:58:00.094923104 +0000 UTC m=+0.178275182 container died e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 04:58:00 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7b9757e85599f436fc206a07db0eac5a53cc25fa8c53bda0f852433e8d9b5684-merged.mount: Deactivated successfully.
Dec  6 04:58:00 np0005548915 podman[230471]: 2025-12-06 09:58:00.145427977 +0000 UTC m=+0.228780025 container remove e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 04:58:00 np0005548915 systemd[1]: libpod-conmon-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope: Deactivated successfully.
Dec  6 04:58:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:58:00 np0005548915 python3.9[230518]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095800 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 04:58:00 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:58:00 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:58:00 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:58:00 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:58:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.003000080s ======
Dec  6 04:58:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec  6 04:58:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:00 np0005548915 podman[230543]: 2025-12-06 09:58:00.386458881 +0000 UTC m=+0.069042505 container create 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec  6 04:58:00 np0005548915 systemd[1]: Started libpod-conmon-17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d.scope.
Dec  6 04:58:00 np0005548915 podman[230543]: 2025-12-06 09:58:00.356996385 +0000 UTC m=+0.039580029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:58:00 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:00 np0005548915 podman[230543]: 2025-12-06 09:58:00.513972621 +0000 UTC m=+0.196556295 container init 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:58:00 np0005548915 podman[230543]: 2025-12-06 09:58:00.530258831 +0000 UTC m=+0.212842455 container start 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 04:58:00 np0005548915 podman[230543]: 2025-12-06 09:58:00.536043097 +0000 UTC m=+0.218626691 container attach 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:58:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:00 np0005548915 python3.9[230639]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:58:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:58:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:58:00 np0005548915 stoic_maxwell[230583]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:58:00 np0005548915 stoic_maxwell[230583]: --> All data devices are unavailable
Dec  6 04:58:00 np0005548915 systemd[1]: libpod-17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d.scope: Deactivated successfully.
Dec  6 04:58:00 np0005548915 podman[230543]: 2025-12-06 09:58:00.979080412 +0000 UTC m=+0.661664036 container died 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 04:58:01 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28-merged.mount: Deactivated successfully.
Dec  6 04:58:01 np0005548915 podman[230543]: 2025-12-06 09:58:01.054011364 +0000 UTC m=+0.736594988 container remove 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:58:01 np0005548915 systemd[1]: libpod-conmon-17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d.scope: Deactivated successfully.
Dec  6 04:58:01 np0005548915 python3.9[230869]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:01 np0005548915 podman[230909]: 2025-12-06 09:58:01.747356503 +0000 UTC m=+0.052176778 container create 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:58:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:01 np0005548915 systemd[1]: Started libpod-conmon-9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532.scope.
Dec  6 04:58:01 np0005548915 podman[230909]: 2025-12-06 09:58:01.730605221 +0000 UTC m=+0.035425516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:58:01 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:01 np0005548915 podman[230909]: 2025-12-06 09:58:01.84730344 +0000 UTC m=+0.152123735 container init 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:58:01 np0005548915 podman[230909]: 2025-12-06 09:58:01.860705132 +0000 UTC m=+0.165525447 container start 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:58:01 np0005548915 podman[230909]: 2025-12-06 09:58:01.864958016 +0000 UTC m=+0.169778311 container attach 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:58:01 np0005548915 gallant_stonebraker[230950]: 167 167
Dec  6 04:58:01 np0005548915 systemd[1]: libpod-9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532.scope: Deactivated successfully.
Dec  6 04:58:01 np0005548915 podman[230909]: 2025-12-06 09:58:01.8673216 +0000 UTC m=+0.172141885 container died 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 04:58:01 np0005548915 systemd[1]: var-lib-containers-storage-overlay-24783a5569f7ae41d9bc886eb9762347e9983a9d8f3db4787c570a9a8c107c60-merged.mount: Deactivated successfully.
Dec  6 04:58:01 np0005548915 podman[230909]: 2025-12-06 09:58:01.906056236 +0000 UTC m=+0.210876531 container remove 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:58:01 np0005548915 systemd[1]: libpod-conmon-9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532.scope: Deactivated successfully.
Dec  6 04:58:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:02 np0005548915 podman[231040]: 2025-12-06 09:58:02.138600531 +0000 UTC m=+0.079555147 container create e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:58:02 np0005548915 systemd[1]: Started libpod-conmon-e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3.scope.
Dec  6 04:58:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:58:02 np0005548915 podman[231040]: 2025-12-06 09:58:02.10856179 +0000 UTC m=+0.049516446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:58:02 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:02 np0005548915 podman[231040]: 2025-12-06 09:58:02.241123708 +0000 UTC m=+0.182078304 container init e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:58:02 np0005548915 podman[231040]: 2025-12-06 09:58:02.256704118 +0000 UTC m=+0.197658694 container start e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:58:02 np0005548915 podman[231040]: 2025-12-06 09:58:02.260318896 +0000 UTC m=+0.201273482 container attach e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 04:58:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:02.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:02 np0005548915 python3.9[231124]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]: {
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:    "1": [
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:        {
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "devices": [
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "/dev/loop3"
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            ],
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "lv_name": "ceph_lv0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "lv_size": "21470642176",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "name": "ceph_lv0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "tags": {
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.cluster_name": "ceph",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.crush_device_class": "",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.encrypted": "0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.osd_id": "1",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.type": "block",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.vdo": "0",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:                "ceph.with_tpm": "0"
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            },
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "type": "block",
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:            "vg_name": "ceph_vg0"
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:        }
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]:    ]
Dec  6 04:58:02 np0005548915 happy_proskuriakova[231091]: }
Dec  6 04:58:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:02 np0005548915 systemd[1]: libpod-e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3.scope: Deactivated successfully.
Dec  6 04:58:02 np0005548915 podman[231040]: 2025-12-06 09:58:02.629920899 +0000 UTC m=+0.570875475 container died e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:58:02 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9-merged.mount: Deactivated successfully.
Dec  6 04:58:02 np0005548915 podman[231040]: 2025-12-06 09:58:02.688188801 +0000 UTC m=+0.629143417 container remove e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:58:02 np0005548915 systemd[1]: libpod-conmon-e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3.scope: Deactivated successfully.
Dec  6 04:58:03 np0005548915 python3.9[231270]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:03 np0005548915 podman[231362]: 2025-12-06 09:58:03.478692923 +0000 UTC m=+0.068177541 container create a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 04:58:03 np0005548915 systemd[1]: Started libpod-conmon-a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8.scope.
Dec  6 04:58:03 np0005548915 podman[231362]: 2025-12-06 09:58:03.45007108 +0000 UTC m=+0.039555538 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:58:03 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:03 np0005548915 podman[231362]: 2025-12-06 09:58:03.603196793 +0000 UTC m=+0.192681251 container init a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 04:58:03 np0005548915 podman[231362]: 2025-12-06 09:58:03.616300857 +0000 UTC m=+0.205785265 container start a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 04:58:03 np0005548915 podman[231362]: 2025-12-06 09:58:03.620761376 +0000 UTC m=+0.210245784 container attach a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:58:03 np0005548915 romantic_visvesvaraya[231413]: 167 167
Dec  6 04:58:03 np0005548915 systemd[1]: libpod-a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8.scope: Deactivated successfully.
Dec  6 04:58:03 np0005548915 podman[231362]: 2025-12-06 09:58:03.625129025 +0000 UTC m=+0.214613433 container died a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 04:58:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay-84d00063dfd1bec23dd70f230e633d20743d0f9fd0b63edb6a18704ee973c78a-merged.mount: Deactivated successfully.
Dec  6 04:58:03 np0005548915 podman[231362]: 2025-12-06 09:58:03.675471843 +0000 UTC m=+0.264956201 container remove a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:58:03 np0005548915 systemd[1]: libpod-conmon-a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8.scope: Deactivated successfully.
Dec  6 04:58:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:03 np0005548915 podman[231507]: 2025-12-06 09:58:03.906464156 +0000 UTC m=+0.058828728 container create 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:58:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:03 np0005548915 systemd[1]: Started libpod-conmon-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope.
Dec  6 04:58:03 np0005548915 python3.9[231501]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:03 np0005548915 podman[231507]: 2025-12-06 09:58:03.882684345 +0000 UTC m=+0.035048947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:58:04 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:04 np0005548915 podman[231507]: 2025-12-06 09:58:04.034028979 +0000 UTC m=+0.186393551 container init 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 04:58:04 np0005548915 podman[231507]: 2025-12-06 09:58:04.04296953 +0000 UTC m=+0.195334102 container start 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:58:04 np0005548915 podman[231507]: 2025-12-06 09:58:04.046657329 +0000 UTC m=+0.199021911 container attach 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 04:58:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 04:58:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:04.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:04 np0005548915 python3.9[231616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:04 np0005548915 lvm[231745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:58:04 np0005548915 lvm[231745]: VG ceph_vg0 finished
Dec  6 04:58:04 np0005548915 serene_kare[231523]: {}
Dec  6 04:58:04 np0005548915 lvm[231761]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:58:04 np0005548915 lvm[231761]: VG ceph_vg0 finished
Dec  6 04:58:04 np0005548915 systemd[1]: libpod-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope: Deactivated successfully.
Dec  6 04:58:04 np0005548915 systemd[1]: libpod-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope: Consumed 1.402s CPU time.
Dec  6 04:58:04 np0005548915 podman[231507]: 2025-12-06 09:58:04.932717509 +0000 UTC m=+1.085082081 container died 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:58:04 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111-merged.mount: Deactivated successfully.
Dec  6 04:58:04 np0005548915 podman[231507]: 2025-12-06 09:58:04.994844386 +0000 UTC m=+1.147208958 container remove 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:58:05 np0005548915 systemd[1]: libpod-conmon-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope: Deactivated successfully.
Dec  6 04:58:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:58:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:58:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:58:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:58:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:58:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:58:05 np0005548915 python3.9[231860]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:58:05 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:05 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:05 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:58:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:06.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:06 np0005548915 python3.9[232062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:58:07 np0005548915 python3.9[232141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:58:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:08.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:08 np0005548915 python3.9[232294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:58:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:58:08 np0005548915 python3.9[232372]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:09 np0005548915 python3.9[232551]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:58:09 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:10 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:10 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:10.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:10.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:10 np0005548915 systemd[1]: Starting Create netns directory...
Dec  6 04:58:10 np0005548915 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 04:58:10 np0005548915 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 04:58:10 np0005548915 systemd[1]: Finished Create netns directory.
Dec  6 04:58:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:58:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:58:11 np0005548915 python3.9[232744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:58:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:12.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:12 np0005548915 python3.9[232897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:12.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:12 np0005548915 python3.9[233020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015091.7682977-1271-159879340235732/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:58:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:14 np0005548915 python3.9[233176]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:58:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:14.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:14.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:14 np0005548915 python3.9[233328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:15 np0005548915 python3.9[233452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015094.3925292-1346-19356272462720/.source.json _original_basename=.wn7hfpfl follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:16.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:16 np0005548915 python3.9[233605]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:17.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:58:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:58:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:18.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:18.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:19 np0005548915 python3.9[234035]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  6 04:58:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:20 np0005548915 python3.9[234188]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 04:58:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:20.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:20.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:58:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  6 04:58:21 np0005548915 python3.9[234341]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  6 04:58:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:22.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:22 np0005548915 podman[234394]: 2025-12-06 09:58:22.466435407 +0000 UTC m=+0.088410025 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 04:58:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240014e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:23 np0005548915 python3[234548]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 04:58:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:58:23
Dec  6 04:58:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:58:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:58:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', 'images', 'backups', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Dec  6 04:58:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:58:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:58:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:58:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:58:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:24.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:58:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:58:24 np0005548915 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  6 04:58:24 np0005548915 podman[234562]: 2025-12-06 09:58:24.578895435 +0000 UTC m=+1.099786814 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec  6 04:58:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:24 np0005548915 podman[234619]: 2025-12-06 09:58:24.686382137 +0000 UTC m=+0.022200114 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec  6 04:58:25 np0005548915 podman[234619]: 2025-12-06 09:58:25.137251217 +0000 UTC m=+0.473069214 container create a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 04:58:25 np0005548915 python3[234548]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec  6 04:58:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:25 np0005548915 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  6 04:58:25 np0005548915 python3.9[234812]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:58:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:26.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec  6 04:58:26 np0005548915 python3.9[234967]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 04:58:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:58:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:27.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 04:58:27 np0005548915 python3.9[235044]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:58:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240014e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:58:28 np0005548915 python3.9[235196]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765015107.5158706-1610-162484489108603/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:28.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:28 np0005548915 podman[235272]: 2025-12-06 09:58:28.610288418 +0000 UTC m=+0.071160497 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 04:58:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:28 np0005548915 python3.9[235273]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:58:28 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:28 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:28 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:29 np0005548915 python3.9[235429]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:58:29 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:30 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:30 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:30 np0005548915 systemd[1]: Starting multipathd container...
Dec  6 04:58:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:30.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:30 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:30 np0005548915 systemd[1]: Started /usr/bin/podman healthcheck run a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.
Dec  6 04:58:30 np0005548915 podman[235468]: 2025-12-06 09:58:30.47321045 +0000 UTC m=+0.136394820 container init a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 04:58:30 np0005548915 multipathd[235484]: + sudo -E kolla_set_configs
Dec  6 04:58:30 np0005548915 podman[235468]: 2025-12-06 09:58:30.49932835 +0000 UTC m=+0.162512700 container start a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:58:30 np0005548915 podman[235468]: multipathd
Dec  6 04:58:30 np0005548915 systemd[1]: Started multipathd container.
Dec  6 04:58:30 np0005548915 multipathd[235484]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 04:58:30 np0005548915 multipathd[235484]: INFO:__main__:Validating config file
Dec  6 04:58:30 np0005548915 multipathd[235484]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 04:58:30 np0005548915 multipathd[235484]: INFO:__main__:Writing out command to execute
Dec  6 04:58:30 np0005548915 multipathd[235484]: ++ cat /run_command
Dec  6 04:58:30 np0005548915 multipathd[235484]: + CMD='/usr/sbin/multipathd -d'
Dec  6 04:58:30 np0005548915 multipathd[235484]: + ARGS=
Dec  6 04:58:30 np0005548915 multipathd[235484]: + sudo kolla_copy_cacerts
Dec  6 04:58:30 np0005548915 podman[235491]: 2025-12-06 09:58:30.613912875 +0000 UTC m=+0.103290179 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 04:58:30 np0005548915 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-5497f377768e607b.service: Main process exited, code=exited, status=1/FAILURE
Dec  6 04:58:30 np0005548915 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-5497f377768e607b.service: Failed with result 'exit-code'.
Dec  6 04:58:30 np0005548915 multipathd[235484]: + [[ ! -n '' ]]
Dec  6 04:58:30 np0005548915 multipathd[235484]: + . kolla_extend_start
Dec  6 04:58:30 np0005548915 multipathd[235484]: Running command: '/usr/sbin/multipathd -d'
Dec  6 04:58:30 np0005548915 multipathd[235484]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  6 04:58:30 np0005548915 multipathd[235484]: + umask 0022
Dec  6 04:58:30 np0005548915 multipathd[235484]: + exec /usr/sbin/multipathd -d
Dec  6 04:58:30 np0005548915 multipathd[235484]: 3481.178802 | --------start up--------
Dec  6 04:58:30 np0005548915 multipathd[235484]: 3481.178822 | read /etc/multipath.conf
Dec  6 04:58:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:30 np0005548915 multipathd[235484]: 3481.185908 | path checkers start up
Dec  6 04:58:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:58:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 04:58:31 np0005548915 python3.9[235674]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:58:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:32 np0005548915 python3.9[235829]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:58:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:33 np0005548915 python3.9[235994]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:58:33 np0005548915 systemd[1]: Stopping multipathd container...
Dec  6 04:58:33 np0005548915 multipathd[235484]: 3483.699862 | exit (signal)
Dec  6 04:58:33 np0005548915 multipathd[235484]: 3483.699910 | --------shut down-------
Dec  6 04:58:33 np0005548915 systemd[1]: libpod-a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.scope: Deactivated successfully.
Dec  6 04:58:33 np0005548915 podman[235999]: 2025-12-06 09:58:33.189046502 +0000 UTC m=+0.088409455 container died a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 04:58:33 np0005548915 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-5497f377768e607b.timer: Deactivated successfully.
Dec  6 04:58:33 np0005548915 systemd[1]: Stopped /usr/bin/podman healthcheck run a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.
Dec  6 04:58:33 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-userdata-shm.mount: Deactivated successfully.
Dec  6 04:58:33 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4-merged.mount: Deactivated successfully.
Dec  6 04:58:33 np0005548915 podman[235999]: 2025-12-06 09:58:33.462457716 +0000 UTC m=+0.361820639 container cleanup a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 04:58:33 np0005548915 podman[235999]: multipathd
Dec  6 04:58:33 np0005548915 podman[236029]: multipathd
Dec  6 04:58:33 np0005548915 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  6 04:58:33 np0005548915 systemd[1]: Stopped multipathd container.
Dec  6 04:58:33 np0005548915 systemd[1]: Starting multipathd container...
Dec  6 04:58:33 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:58:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 04:58:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:33 np0005548915 systemd[1]: Started /usr/bin/podman healthcheck run a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.
Dec  6 04:58:33 np0005548915 podman[236042]: 2025-12-06 09:58:33.840354971 +0000 UTC m=+0.276047037 container init a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 04:58:33 np0005548915 multipathd[236057]: + sudo -E kolla_set_configs
Dec  6 04:58:33 np0005548915 podman[236042]: 2025-12-06 09:58:33.874964382 +0000 UTC m=+0.310656418 container start a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 04:58:33 np0005548915 podman[236042]: multipathd
Dec  6 04:58:33 np0005548915 systemd[1]: Started multipathd container.
Dec  6 04:58:33 np0005548915 multipathd[236057]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 04:58:33 np0005548915 multipathd[236057]: INFO:__main__:Validating config file
Dec  6 04:58:33 np0005548915 multipathd[236057]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 04:58:33 np0005548915 multipathd[236057]: INFO:__main__:Writing out command to execute
Dec  6 04:58:33 np0005548915 multipathd[236057]: ++ cat /run_command
Dec  6 04:58:33 np0005548915 multipathd[236057]: + CMD='/usr/sbin/multipathd -d'
Dec  6 04:58:33 np0005548915 multipathd[236057]: + ARGS=
Dec  6 04:58:33 np0005548915 multipathd[236057]: + sudo kolla_copy_cacerts
Dec  6 04:58:33 np0005548915 multipathd[236057]: + [[ ! -n '' ]]
Dec  6 04:58:33 np0005548915 multipathd[236057]: + . kolla_extend_start
Dec  6 04:58:33 np0005548915 multipathd[236057]: Running command: '/usr/sbin/multipathd -d'
Dec  6 04:58:33 np0005548915 multipathd[236057]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  6 04:58:33 np0005548915 multipathd[236057]: + umask 0022
Dec  6 04:58:33 np0005548915 multipathd[236057]: + exec /usr/sbin/multipathd -d
Dec  6 04:58:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:33 np0005548915 podman[236064]: 2025-12-06 09:58:33.986606678 +0000 UTC m=+0.096155857 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  6 04:58:33 np0005548915 multipathd[236057]: 3484.529911 | --------start up--------
Dec  6 04:58:33 np0005548915 multipathd[236057]: 3484.529933 | read /etc/multipath.conf
Dec  6 04:58:33 np0005548915 multipathd[236057]: 3484.537746 | path checkers start up
Dec  6 04:58:33 np0005548915 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-1841828b9f18ad43.service: Main process exited, code=exited, status=1/FAILURE
Dec  6 04:58:33 np0005548915 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-1841828b9f18ad43.service: Failed with result 'exit-code'.
Dec  6 04:58:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:34.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:34 np0005548915 python3.9[236249]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:35 np0005548915 python3.9[236403]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 04:58:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:36.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:36 np0005548915 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  6 04:58:36 np0005548915 python3.9[236555]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  6 04:58:36 np0005548915 kernel: Key type psk registered
Dec  6 04:58:36 np0005548915 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  6 04:58:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:37.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:58:37 np0005548915 python3.9[236720]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:58:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:58:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:38.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:38 np0005548915 python3.9[236843]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015117.1805418-1850-23942385384611/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:58:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:58:39 np0005548915 python3.9[236996]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.195617) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120196284, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1210, "num_deletes": 256, "total_data_size": 2180583, "memory_usage": 2220464, "flush_reason": "Manual Compaction"}
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec  6 04:58:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120229339, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2138927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17739, "largest_seqno": 18948, "table_properties": {"data_size": 2133279, "index_size": 3039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11449, "raw_average_key_size": 18, "raw_value_size": 2121946, "raw_average_value_size": 3455, "num_data_blocks": 137, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015004, "oldest_key_time": 1765015004, "file_creation_time": 1765015120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 33161 microseconds, and 7105 cpu microseconds.
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.229399) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2138927 bytes OK
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.229430) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233020) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233042) EVENT_LOG_v1 {"time_micros": 1765015120233036, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2175253, prev total WAL file size 2175253, number of live WAL files 2.
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233956) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2088KB)], [38(11MB)]
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120234020, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14472882, "oldest_snapshot_seqno": -1}
Dec  6 04:58:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:40.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:40.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4955 keys, 13987153 bytes, temperature: kUnknown
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120377901, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13987153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13952304, "index_size": 21363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 126079, "raw_average_key_size": 25, "raw_value_size": 13860665, "raw_average_value_size": 2797, "num_data_blocks": 876, "num_entries": 4955, "num_filter_entries": 4955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.378283) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13987153 bytes
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.405238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.5 rd, 97.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.8 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(13.3) write-amplify(6.5) OK, records in: 5481, records dropped: 526 output_compression: NoCompression
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.405292) EVENT_LOG_v1 {"time_micros": 1765015120405269, "job": 18, "event": "compaction_finished", "compaction_time_micros": 143981, "compaction_time_cpu_micros": 37844, "output_level": 6, "num_output_files": 1, "total_output_size": 13987153, "num_input_records": 5481, "num_output_records": 4955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120405987, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120409225, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:58:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 04:58:40 np0005548915 python3.9[237149]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:58:40 np0005548915 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  6 04:58:40 np0005548915 systemd[1]: Stopped Load Kernel Modules.
Dec  6 04:58:40 np0005548915 systemd[1]: Stopping Load Kernel Modules...
Dec  6 04:58:40 np0005548915 systemd[1]: Starting Load Kernel Modules...
Dec  6 04:58:40 np0005548915 systemd[1]: Finished Load Kernel Modules.
Dec  6 04:58:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:58:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:58:41 np0005548915 python3.9[237306]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 04:58:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:42.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:44.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:44 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:44 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:44 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:44 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:44 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:44 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:45 np0005548915 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  6 04:58:45 np0005548915 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  6 04:58:45 np0005548915 lvm[237426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:58:45 np0005548915 lvm[237426]: VG ceph_vg0 finished
Dec  6 04:58:45 np0005548915 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 04:58:45 np0005548915 systemd[1]: Starting man-db-cache-update.service...
Dec  6 04:58:45 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:45 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:45 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:45 np0005548915 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 04:58:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:46.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:47.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:58:47 np0005548915 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 04:58:47 np0005548915 systemd[1]: Finished man-db-cache-update.service.
Dec  6 04:58:47 np0005548915 systemd[1]: man-db-cache-update.service: Consumed 1.942s CPU time.
Dec  6 04:58:47 np0005548915 systemd[1]: run-r60797eb8a75a421ca9fd1bcdbde47ba3.service: Deactivated successfully.
Dec  6 04:58:47 np0005548915 python3.9[238770]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 04:58:47 np0005548915 systemd[1]: Stopping Open-iSCSI...
Dec  6 04:58:47 np0005548915 iscsid[226247]: iscsid shutting down.
Dec  6 04:58:47 np0005548915 systemd[1]: iscsid.service: Deactivated successfully.
Dec  6 04:58:47 np0005548915 systemd[1]: Stopped Open-iSCSI.
Dec  6 04:58:47 np0005548915 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  6 04:58:47 np0005548915 systemd[1]: Starting Open-iSCSI...
Dec  6 04:58:47 np0005548915 systemd[1]: Started Open-iSCSI.
Dec  6 04:58:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:58:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:48.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:48 np0005548915 python3.9[238926]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 04:58:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:49 np0005548915 python3.9[239109]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:58:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:50.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:50.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:58:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  6 04:58:50 np0005548915 python3.9[239261]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:58:50 np0005548915 systemd[1]: Reloading.
Dec  6 04:58:51 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:58:51 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:58:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:52 np0005548915 python3.9[239448]: ansible-ansible.builtin.service_facts Invoked
Dec  6 04:58:52 np0005548915 network[239465]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 04:58:52 np0005548915 network[239466]: 'network-scripts' will be removed from distribution in near future.
Dec  6 04:58:52 np0005548915 network[239467]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 04:58:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:53 np0005548915 podman[239476]: 2025-12-06 09:58:53.202627915 +0000 UTC m=+0.115105901 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 04:58:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:58:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:58:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:58:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:58:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:58:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:58:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:58:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:58:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:58:54.230 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:58:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:58:54.231 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:58:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:58:54.231 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:58:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:58:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:58:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:54.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:58:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:58:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:56.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:57.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:58:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:58:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:58:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:58:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:58:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:58:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:58:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:58:59 np0005548915 podman[239747]: 2025-12-06 09:58:59.128819197 +0000 UTC m=+0.063104048 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 04:58:59 np0005548915 python3.9[239791]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:58:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:00 np0005548915 python3.9[239945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:59:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:00.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:59:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:59:01 np0005548915 python3.9[240098]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:59:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004720 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:02 np0005548915 python3.9[240253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:59:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:02.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:02.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:02 np0005548915 python3.9[240406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:59:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:03 np0005548915 python3.9[240561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:59:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:04 np0005548915 podman[240662]: 2025-12-06 09:59:04.351229622 +0000 UTC m=+0.087894500 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 04:59:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:04.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:04 np0005548915 python3.9[240733]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:05 np0005548915 python3.9[240887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 04:59:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:59:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004760 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 04:59:06 np0005548915 python3.9[241193]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:07.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:59:07 np0005548915 podman[241387]: 2025-12-06 09:59:07.425569182 +0000 UTC m=+0.051479030 container create 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:59:07 np0005548915 systemd[1]: Started libpod-conmon-7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c.scope.
Dec  6 04:59:07 np0005548915 podman[241387]: 2025-12-06 09:59:07.398363143 +0000 UTC m=+0.024272971 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:59:07 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:59:07 np0005548915 podman[241387]: 2025-12-06 09:59:07.521273624 +0000 UTC m=+0.147183492 container init 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:59:07 np0005548915 podman[241387]: 2025-12-06 09:59:07.53360185 +0000 UTC m=+0.159511658 container start 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  6 04:59:07 np0005548915 podman[241387]: 2025-12-06 09:59:07.538625047 +0000 UTC m=+0.164534935 container attach 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 04:59:07 np0005548915 quizzical_hamilton[241428]: 167 167
Dec  6 04:59:07 np0005548915 systemd[1]: libpod-7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c.scope: Deactivated successfully.
Dec  6 04:59:07 np0005548915 podman[241387]: 2025-12-06 09:59:07.542761589 +0000 UTC m=+0.168671437 container died 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec  6 04:59:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ef427d50911734f20af4cd75c39f056991c44506908268126766b141067456e4-merged.mount: Deactivated successfully.
Dec  6 04:59:07 np0005548915 podman[241387]: 2025-12-06 09:59:07.608103765 +0000 UTC m=+0.234013613 container remove 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 04:59:07 np0005548915 systemd[1]: libpod-conmon-7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c.scope: Deactivated successfully.
Dec  6 04:59:07 np0005548915 podman[241481]: 2025-12-06 09:59:07.792626512 +0000 UTC m=+0.054539343 container create e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:59:07 np0005548915 python3.9[241468]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:07 np0005548915 systemd[1]: Started libpod-conmon-e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac.scope.
Dec  6 04:59:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:07 np0005548915 podman[241481]: 2025-12-06 09:59:07.7645923 +0000 UTC m=+0.026505081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:59:07 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:59:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:07 np0005548915 podman[241481]: 2025-12-06 09:59:07.900899147 +0000 UTC m=+0.162811978 container init e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 04:59:07 np0005548915 podman[241481]: 2025-12-06 09:59:07.912151913 +0000 UTC m=+0.174064684 container start e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 04:59:07 np0005548915 podman[241481]: 2025-12-06 09:59:07.918395842 +0000 UTC m=+0.180308653 container attach e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 04:59:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:59:08 np0005548915 elegant_antonelli[241497]: --> passed data devices: 0 physical, 1 LVM
Dec  6 04:59:08 np0005548915 elegant_antonelli[241497]: --> All data devices are unavailable
Dec  6 04:59:08 np0005548915 systemd[1]: libpod-e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac.scope: Deactivated successfully.
Dec  6 04:59:08 np0005548915 podman[241481]: 2025-12-06 09:59:08.286733158 +0000 UTC m=+0.548645979 container died e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 04:59:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e-merged.mount: Deactivated successfully.
Dec  6 04:59:08 np0005548915 podman[241481]: 2025-12-06 09:59:08.3387025 +0000 UTC m=+0.600615271 container remove e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:59:08 np0005548915 systemd[1]: libpod-conmon-e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac.scope: Deactivated successfully.
Dec  6 04:59:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:59:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:59:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:08.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:08 np0005548915 python3.9[241663]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:59:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:59:09 np0005548915 podman[241919]: 2025-12-06 09:59:08.999833466 +0000 UTC m=+0.048000376 container create da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:59:09 np0005548915 systemd[1]: Started libpod-conmon-da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4.scope.
Dec  6 04:59:09 np0005548915 podman[241919]: 2025-12-06 09:59:08.978868066 +0000 UTC m=+0.027035006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:59:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:59:09 np0005548915 podman[241919]: 2025-12-06 09:59:09.109724135 +0000 UTC m=+0.157891075 container init da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:59:09 np0005548915 podman[241919]: 2025-12-06 09:59:09.119446649 +0000 UTC m=+0.167613549 container start da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 04:59:09 np0005548915 podman[241919]: 2025-12-06 09:59:09.124008463 +0000 UTC m=+0.172175383 container attach da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec  6 04:59:09 np0005548915 peaceful_visvesvaraya[241937]: 167 167
Dec  6 04:59:09 np0005548915 systemd[1]: libpod-da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4.scope: Deactivated successfully.
Dec  6 04:59:09 np0005548915 podman[241919]: 2025-12-06 09:59:09.129602315 +0000 UTC m=+0.177769235 container died da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  6 04:59:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a097b1fe085ed5cd0054c0c5cdb83eec5720f7edda4dbbb59ad91fa0265a62da-merged.mount: Deactivated successfully.
Dec  6 04:59:09 np0005548915 podman[241919]: 2025-12-06 09:59:09.177321823 +0000 UTC m=+0.225488763 container remove da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 04:59:09 np0005548915 systemd[1]: libpod-conmon-da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4.scope: Deactivated successfully.
Dec  6 04:59:09 np0005548915 python3.9[241921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:09 np0005548915 podman[241986]: 2025-12-06 09:59:09.377107174 +0000 UTC m=+0.053767742 container create 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 04:59:09 np0005548915 systemd[1]: Started libpod-conmon-771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57.scope.
Dec  6 04:59:09 np0005548915 podman[241986]: 2025-12-06 09:59:09.357258335 +0000 UTC m=+0.033918933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:59:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:59:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:09 np0005548915 podman[241986]: 2025-12-06 09:59:09.475927491 +0000 UTC m=+0.152588079 container init 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:59:09 np0005548915 podman[241986]: 2025-12-06 09:59:09.48654982 +0000 UTC m=+0.163210388 container start 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 04:59:09 np0005548915 podman[241986]: 2025-12-06 09:59:09.495593656 +0000 UTC m=+0.172254224 container attach 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:59:09 np0005548915 zen_yalow[242056]: {
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:    "1": [
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:        {
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "devices": [
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "/dev/loop3"
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            ],
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "lv_name": "ceph_lv0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "lv_size": "21470642176",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "name": "ceph_lv0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "tags": {
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.cephx_lockbox_secret": "",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.cluster_name": "ceph",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.crush_device_class": "",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.encrypted": "0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.osd_id": "1",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.type": "block",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.vdo": "0",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:                "ceph.with_tpm": "0"
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            },
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "type": "block",
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:            "vg_name": "ceph_vg0"
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:        }
Dec  6 04:59:09 np0005548915 zen_yalow[242056]:    ]
Dec  6 04:59:09 np0005548915 zen_yalow[242056]: }
Dec  6 04:59:09 np0005548915 systemd[1]: libpod-771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57.scope: Deactivated successfully.
Dec  6 04:59:09 np0005548915 podman[241986]: 2025-12-06 09:59:09.800748043 +0000 UTC m=+0.477408611 container died 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec  6 04:59:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd-merged.mount: Deactivated successfully.
Dec  6 04:59:09 np0005548915 podman[241986]: 2025-12-06 09:59:09.848179082 +0000 UTC m=+0.524839650 container remove 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 04:59:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:09 np0005548915 systemd[1]: libpod-conmon-771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57.scope: Deactivated successfully.
Dec  6 04:59:09 np0005548915 python3.9[242161]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:10.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:10 np0005548915 podman[242423]: 2025-12-06 09:59:10.432603203 +0000 UTC m=+0.046569517 container create c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 04:59:10 np0005548915 systemd[1]: Started libpod-conmon-c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca.scope.
Dec  6 04:59:10 np0005548915 python3.9[242407]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:10 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:59:10 np0005548915 podman[242423]: 2025-12-06 09:59:10.409726381 +0000 UTC m=+0.023692685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:59:10 np0005548915 podman[242423]: 2025-12-06 09:59:10.521623404 +0000 UTC m=+0.135589708 container init c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 04:59:10 np0005548915 podman[242423]: 2025-12-06 09:59:10.531770669 +0000 UTC m=+0.145736953 container start c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:59:10 np0005548915 podman[242423]: 2025-12-06 09:59:10.535213713 +0000 UTC m=+0.149179997 container attach c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:59:10 np0005548915 boring_satoshi[242439]: 167 167
Dec  6 04:59:10 np0005548915 systemd[1]: libpod-c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca.scope: Deactivated successfully.
Dec  6 04:59:10 np0005548915 podman[242461]: 2025-12-06 09:59:10.585096829 +0000 UTC m=+0.032374141 container died c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 04:59:10 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7a05893966a1e760099750fd8cc9a9f0b93a5af647543e0d710761ddfe03f1d9-merged.mount: Deactivated successfully.
Dec  6 04:59:10 np0005548915 podman[242461]: 2025-12-06 09:59:10.620948344 +0000 UTC m=+0.068225626 container remove c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 04:59:10 np0005548915 systemd[1]: libpod-conmon-c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca.scope: Deactivated successfully.
Dec  6 04:59:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:10 np0005548915 podman[242514]: 2025-12-06 09:59:10.890164304 +0000 UTC m=+0.053536297 container create 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 04:59:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:59:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:59:10 np0005548915 systemd[1]: Started libpod-conmon-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope.
Dec  6 04:59:10 np0005548915 podman[242514]: 2025-12-06 09:59:10.870734205 +0000 UTC m=+0.034106218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 04:59:10 np0005548915 systemd[1]: Started libcrun container.
Dec  6 04:59:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 04:59:11 np0005548915 podman[242514]: 2025-12-06 09:59:11.000335219 +0000 UTC m=+0.163707242 container init 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 04:59:11 np0005548915 podman[242514]: 2025-12-06 09:59:11.00991914 +0000 UTC m=+0.173291133 container start 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 04:59:11 np0005548915 podman[242514]: 2025-12-06 09:59:11.013947879 +0000 UTC m=+0.177319902 container attach 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:59:11 np0005548915 python3.9[242647]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:11 np0005548915 lvm[242812]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 04:59:11 np0005548915 lvm[242812]: VG ceph_vg0 finished
Dec  6 04:59:11 np0005548915 vigilant_noyce[242560]: {}
Dec  6 04:59:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:11 np0005548915 systemd[1]: libpod-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope: Deactivated successfully.
Dec  6 04:59:11 np0005548915 podman[242514]: 2025-12-06 09:59:11.88781701 +0000 UTC m=+1.051189033 container died 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 04:59:11 np0005548915 systemd[1]: libpod-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope: Consumed 1.416s CPU time.
Dec  6 04:59:11 np0005548915 systemd[1]: var-lib-containers-storage-overlay-731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def-merged.mount: Deactivated successfully.
Dec  6 04:59:11 np0005548915 podman[242514]: 2025-12-06 09:59:11.950238897 +0000 UTC m=+1.113610930 container remove 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 04:59:11 np0005548915 systemd[1]: libpod-conmon-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope: Deactivated successfully.
Dec  6 04:59:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 04:59:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 04:59:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:12 np0005548915 python3.9[242873]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:12.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:12 np0005548915 python3.9[243056]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:13 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:13 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 04:59:13 np0005548915 python3.9[243209]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300047a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:14 np0005548915 python3.9[243362]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:14.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:14.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:14 np0005548915 python3.9[243514]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:15 np0005548915 python3.9[243668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:16.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:16 np0005548915 python3.9[243821]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:16.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:17.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:59:17 np0005548915 python3.9[243973]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:17 np0005548915 python3.9[244127]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:59:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:59:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:18.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:59:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:18 np0005548915 python3.9[244280]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:19 np0005548915 python3.9[244434]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 04:59:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:20.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:20.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:20 np0005548915 python3.9[244586]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 04:59:20 np0005548915 systemd[1]: Reloading.
Dec  6 04:59:20 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 04:59:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:59:20 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 04:59:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 04:59:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:21 np0005548915 python3.9[244775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:59:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:59:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:22.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:22 np0005548915 python3.9[244928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:23 np0005548915 python3.9[245082]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:23 np0005548915 podman[245084]: 2025-12-06 09:59:23.435342284 +0000 UTC m=+0.096262588 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  6 04:59:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:59:23
Dec  6 04:59:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 04:59:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 04:59:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'vms', '.nfs', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Dec  6 04:59:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 04:59:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:59:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:59:23 np0005548915 python3.9[245263]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:59:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:59:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:24.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 04:59:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 04:59:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:24 np0005548915 python3.9[245416]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:25 np0005548915 python3.9[245570]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:26 np0005548915 python3.9[245724]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:59:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:26.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:59:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:26 np0005548915 python3.9[245877]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 04:59:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:27.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:59:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:59:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:28.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:28 np0005548915 python3.9[246032]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:29 np0005548915 podman[246185]: 2025-12-06 09:59:29.245159751 +0000 UTC m=+0.057073552 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  6 04:59:29 np0005548915 python3.9[246186]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:30 np0005548915 python3.9[246382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:59:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:30.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:59:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:30.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 04:59:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 04:59:30 np0005548915 python3.9[246534]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:31 np0005548915 python3.9[246688]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:32 np0005548915 python3.9[246840]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:32.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:32 np0005548915 python3.9[246992]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:33 np0005548915 python3.9[247146]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:34 np0005548915 python3.9[247298]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:34.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:34.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:34 np0005548915 podman[247422]: 2025-12-06 09:59:34.75749909 +0000 UTC m=+0.088776255 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 04:59:34 np0005548915 python3.9[247468]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:36.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:36.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:37.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:59:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 04:59:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9187 writes, 35K keys, 9187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9187 writes, 2104 syncs, 4.37 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 776 writes, 1212 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s#012Interval WAL: 776 writes, 372 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  6 04:59:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 04:59:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 04:59:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:38.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 04:59:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:38.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:59:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:59:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:40.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:59:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:59:40 np0005548915 python3.9[247629]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  6 04:59:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:42 np0005548915 python3.9[247784]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 04:59:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:43 np0005548915 python3.9[247942]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 04:59:43 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 04:59:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:44 np0005548915 systemd-logind[795]: New session 55 of user zuul.
Dec  6 04:59:44 np0005548915 systemd[1]: Started Session 55 of User zuul.
Dec  6 04:59:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:44.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:44 np0005548915 systemd[1]: session-55.scope: Deactivated successfully.
Dec  6 04:59:44 np0005548915 systemd-logind[795]: Session 55 logged out. Waiting for processes to exit.
Dec  6 04:59:44 np0005548915 systemd-logind[795]: Removed session 55.
Dec  6 04:59:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:45 np0005548915 python3.9[248132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:45 np0005548915 python3.9[248254]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015184.768256-3433-224706275421914/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 04:59:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:46.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:46 np0005548915 python3.9[248404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:46 np0005548915 python3.9[248480]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:47.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:59:47 np0005548915 python3.9[248632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 04:59:48 np0005548915 python3.9[248754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015187.1660318-3433-22739062391989/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:48.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:48 np0005548915 python3.9[248904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:49 np0005548915 python3.9[249026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015188.4549277-3433-228415729516958/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:50 np0005548915 python3.9[249202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:59:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:50.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:50.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:50 np0005548915 python3.9[249326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015189.6680846-3433-222025138862778/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:59:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 04:59:51 np0005548915 python3.9[249476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:51 np0005548915 python3.9[249599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015190.7157543-3433-6976051288526/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095951 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 04:59:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:59:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:52.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:53 np0005548915 python3.9[249751]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:53 np0005548915 podman[249877]: 2025-12-06 09:59:53.763753155 +0000 UTC m=+0.125435231 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 04:59:53 np0005548915 python3.9[249920]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 04:59:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 04:59:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 04:59:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:59:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:59:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:59:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:59:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 04:59:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 04:59:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:59:54.232 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 04:59:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:59:54.233 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 04:59:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 09:59:54.233 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 04:59:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:59:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:54.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:54 np0005548915 python3.9[250082]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:59:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 04:59:55 np0005548915 python3.9[250235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:56 np0005548915 python3.9[250359]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765015194.9023266-3754-56958980149764/.source _original_basename=.v9d0ml7r follow=False checksum=f3bb099f0d435dfeb4ffdbf95408e4ce4967ba08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  6 04:59:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 04:59:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:56.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:57 np0005548915 python3.9[250511]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 04:59:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:57.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 04:59:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:57 np0005548915 python3.9[250665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  6 04:59:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 04:59:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 04:59:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 04:59:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 04:59:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 04:59:58 np0005548915 python3.9[250786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015197.3150108-3832-233630314235761/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=81f1f28d070b2613355f782b83a5777fdba9540e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 04:59:59 np0005548915 python3.9[250937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 04:59:59 np0005548915 podman[250981]: 2025-12-06 09:59:59.467035335 +0000 UTC m=+0.085791763 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 04:59:59 np0005548915 python3.9[251078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015198.7267964-3877-124238013007772/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=2efe6ae78bce1c26d2c384be079fa366810076ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 04:59:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 04:59:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  6 05:00:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:00:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:00:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:00.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:00 np0005548915 ceph-mon[74327]: overall HEALTH_OK
Dec  6 05:00:00 np0005548915 python3.9[251230]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  6 05:00:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:00:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:00:01 np0005548915 python3.9[251384]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 05:00:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:00:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:02.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:02 np0005548915 python3[251536]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 05:00:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:00:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:00:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:00:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:00:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:04.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:00:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:00:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:00:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:00:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:06.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:07.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:00:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:07.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:00:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:07.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:00:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 05:00:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:08.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:08.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:00:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:00:09 np0005548915 podman[251592]: 2025-12-06 10:00:09.34551541 +0000 UTC m=+4.263087914 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 05:00:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:00:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:10.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:10.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 05:00:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 05:00:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100011 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:00:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:00:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:12.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:00:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:00:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 05:00:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:00:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:16.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:16.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:00:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:00:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:17.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:00:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:00:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 05:00:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:17 np0005548915 podman[251550]: 2025-12-06 10:00:17.692974484 +0000 UTC m=+14.728794853 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec  6 05:00:17 np0005548915 podman[251768]: 2025-12-06 10:00:17.841350429 +0000 UTC m=+0.049041434 container create 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, container_name=nova_compute_init, managed_by=edpm_ansible)
Dec  6 05:00:17 np0005548915 podman[251768]: 2025-12-06 10:00:17.815572848 +0000 UTC m=+0.023263843 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec  6 05:00:17 np0005548915 python3[251536]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  6 05:00:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:00:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:18.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:00:18 np0005548915 python3.9[252016]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 05:00:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:18 np0005548915 podman[252088]: 2025-12-06 10:00:18.930915474 +0000 UTC m=+0.053851825 container create ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:00:18 np0005548915 systemd[1]: Started libpod-conmon-ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c.scope.
Dec  6 05:00:19 np0005548915 podman[252088]: 2025-12-06 10:00:18.907805946 +0000 UTC m=+0.030742317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:00:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:19 np0005548915 podman[252088]: 2025-12-06 10:00:19.032935418 +0000 UTC m=+0.155871779 container init ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:00:19 np0005548915 podman[252088]: 2025-12-06 10:00:19.041290835 +0000 UTC m=+0.164227166 container start ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Dec  6 05:00:19 np0005548915 podman[252088]: 2025-12-06 10:00:19.045439568 +0000 UTC m=+0.168375929 container attach ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 05:00:19 np0005548915 competent_easley[252104]: 167 167
Dec  6 05:00:19 np0005548915 systemd[1]: libpod-ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c.scope: Deactivated successfully.
Dec  6 05:00:19 np0005548915 podman[252088]: 2025-12-06 10:00:19.052240363 +0000 UTC m=+0.175176694 container died ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 05:00:19 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d26432cbe5eaa7238276b3456e50ab7179bd7e01b4b23fc89c744f2dbfc67c56-merged.mount: Deactivated successfully.
Dec  6 05:00:19 np0005548915 podman[252088]: 2025-12-06 10:00:19.094813081 +0000 UTC m=+0.217749412 container remove ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 05:00:19 np0005548915 systemd[1]: libpod-conmon-ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c.scope: Deactivated successfully.
Dec  6 05:00:19 np0005548915 podman[252130]: 2025-12-06 10:00:19.268677748 +0000 UTC m=+0.047364499 container create 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:00:19 np0005548915 systemd[1]: Started libpod-conmon-26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40.scope.
Dec  6 05:00:19 np0005548915 podman[252130]: 2025-12-06 10:00:19.249007613 +0000 UTC m=+0.027694394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:00:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:19 np0005548915 podman[252130]: 2025-12-06 10:00:19.363053924 +0000 UTC m=+0.141740695 container init 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:00:19 np0005548915 podman[252130]: 2025-12-06 10:00:19.37026415 +0000 UTC m=+0.148950911 container start 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 05:00:19 np0005548915 podman[252130]: 2025-12-06 10:00:19.374998779 +0000 UTC m=+0.153685540 container attach 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:00:19 np0005548915 exciting_haibt[252173]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:00:19 np0005548915 exciting_haibt[252173]: --> All data devices are unavailable
Dec  6 05:00:19 np0005548915 systemd[1]: libpod-26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40.scope: Deactivated successfully.
Dec  6 05:00:19 np0005548915 podman[252130]: 2025-12-06 10:00:19.74505822 +0000 UTC m=+0.523744971 container died 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 05:00:19 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3-merged.mount: Deactivated successfully.
Dec  6 05:00:19 np0005548915 python3.9[252279]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  6 05:00:19 np0005548915 podman[252130]: 2025-12-06 10:00:19.792609633 +0000 UTC m=+0.571296384 container remove 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 05:00:19 np0005548915 systemd[1]: libpod-conmon-26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40.scope: Deactivated successfully.
Dec  6 05:00:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:00:20 np0005548915 podman[252529]: 2025-12-06 10:00:20.31181341 +0000 UTC m=+0.021908147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:00:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:00:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:00:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:00:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:00:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 05:00:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  6 05:00:21 np0005548915 podman[252529]: 2025-12-06 10:00:21.274993439 +0000 UTC m=+0.985088146 container create c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 05:00:21 np0005548915 python3.9[252555]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 05:00:21 np0005548915 systemd[1]: Started libpod-conmon-c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439.scope.
Dec  6 05:00:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:21 np0005548915 podman[252529]: 2025-12-06 10:00:21.367315699 +0000 UTC m=+1.077410426 container init c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:00:21 np0005548915 podman[252529]: 2025-12-06 10:00:21.3739679 +0000 UTC m=+1.084062607 container start c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:00:21 np0005548915 podman[252529]: 2025-12-06 10:00:21.377770684 +0000 UTC m=+1.087865391 container attach c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 05:00:21 np0005548915 quizzical_spence[252560]: 167 167
Dec  6 05:00:21 np0005548915 systemd[1]: libpod-c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439.scope: Deactivated successfully.
Dec  6 05:00:21 np0005548915 podman[252529]: 2025-12-06 10:00:21.380146778 +0000 UTC m=+1.090241485 container died c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  6 05:00:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0b25d56b2be0a50fe634c11d678d2c14a5b39f1eeba087030195356adb6d38ea-merged.mount: Deactivated successfully.
Dec  6 05:00:21 np0005548915 podman[252529]: 2025-12-06 10:00:21.416308762 +0000 UTC m=+1.126403469 container remove c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:00:21 np0005548915 systemd[1]: libpod-conmon-c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439.scope: Deactivated successfully.
Dec  6 05:00:21 np0005548915 podman[252609]: 2025-12-06 10:00:21.574328508 +0000 UTC m=+0.043237277 container create 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  6 05:00:21 np0005548915 systemd[1]: Started libpod-conmon-3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59.scope.
Dec  6 05:00:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:21 np0005548915 podman[252609]: 2025-12-06 10:00:21.639257473 +0000 UTC m=+0.108166262 container init 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 05:00:21 np0005548915 podman[252609]: 2025-12-06 10:00:21.648805243 +0000 UTC m=+0.117714012 container start 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:00:21 np0005548915 podman[252609]: 2025-12-06 10:00:21.555331461 +0000 UTC m=+0.024240250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:00:21 np0005548915 podman[252609]: 2025-12-06 10:00:21.651845026 +0000 UTC m=+0.120753795 container attach 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:00:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]: {
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:    "1": [
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:        {
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "devices": [
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "/dev/loop3"
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            ],
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "lv_name": "ceph_lv0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "lv_size": "21470642176",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "name": "ceph_lv0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "tags": {
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.cluster_name": "ceph",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.crush_device_class": "",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.encrypted": "0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.osd_id": "1",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.type": "block",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.vdo": "0",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:                "ceph.with_tpm": "0"
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            },
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "type": "block",
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:            "vg_name": "ceph_vg0"
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:        }
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]:    ]
Dec  6 05:00:21 np0005548915 friendly_heisenberg[252625]: }
Dec  6 05:00:21 np0005548915 systemd[1]: libpod-3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59.scope: Deactivated successfully.
Dec  6 05:00:21 np0005548915 podman[252609]: 2025-12-06 10:00:21.98157373 +0000 UTC m=+0.450482499 container died 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 05:00:22 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4-merged.mount: Deactivated successfully.
Dec  6 05:00:22 np0005548915 podman[252609]: 2025-12-06 10:00:22.026336087 +0000 UTC m=+0.495244856 container remove 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 05:00:22 np0005548915 systemd[1]: libpod-conmon-3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59.scope: Deactivated successfully.
Dec  6 05:00:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:22 np0005548915 python3[252763]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 05:00:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:00:22 np0005548915 podman[252860]: 2025-12-06 10:00:22.430548639 +0000 UTC m=+0.050436433 container create 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 05:00:22 np0005548915 podman[252860]: 2025-12-06 10:00:22.406610567 +0000 UTC m=+0.026498381 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec  6 05:00:22 np0005548915 python3[252763]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 kolla_start
Dec  6 05:00:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:22.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:00:22 np0005548915 podman[252934]: 2025-12-06 10:00:22.595184394 +0000 UTC m=+0.044386748 container create 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:00:22 np0005548915 systemd[1]: Started libpod-conmon-34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e.scope.
Dec  6 05:00:22 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:22 np0005548915 podman[252934]: 2025-12-06 10:00:22.575661883 +0000 UTC m=+0.024864257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:00:22 np0005548915 podman[252934]: 2025-12-06 10:00:22.682429736 +0000 UTC m=+0.131632120 container init 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:00:22 np0005548915 podman[252934]: 2025-12-06 10:00:22.690748802 +0000 UTC m=+0.139951156 container start 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 05:00:22 np0005548915 podman[252934]: 2025-12-06 10:00:22.694113655 +0000 UTC m=+0.143316029 container attach 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:00:22 np0005548915 beautiful_sinoussi[252974]: 167 167
Dec  6 05:00:22 np0005548915 systemd[1]: libpod-34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e.scope: Deactivated successfully.
Dec  6 05:00:22 np0005548915 podman[252934]: 2025-12-06 10:00:22.699255564 +0000 UTC m=+0.148457918 container died 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  6 05:00:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:22 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2c61306d3468cea7fd5de5147df6f81c5d90abae074f9488fa33c629623dcfce-merged.mount: Deactivated successfully.
Dec  6 05:00:22 np0005548915 podman[252934]: 2025-12-06 10:00:22.744819653 +0000 UTC m=+0.194022007 container remove 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 05:00:22 np0005548915 systemd[1]: libpod-conmon-34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e.scope: Deactivated successfully.
Dec  6 05:00:22 np0005548915 podman[253062]: 2025-12-06 10:00:22.9105736 +0000 UTC m=+0.041006256 container create ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 05:00:22 np0005548915 systemd[1]: Started libpod-conmon-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope.
Dec  6 05:00:22 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:22 np0005548915 podman[253062]: 2025-12-06 10:00:22.893566688 +0000 UTC m=+0.023999364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:00:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:22 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:23 np0005548915 podman[253062]: 2025-12-06 10:00:23.007897656 +0000 UTC m=+0.138330322 container init ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:00:23 np0005548915 podman[253062]: 2025-12-06 10:00:23.017168768 +0000 UTC m=+0.147601424 container start ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 05:00:23 np0005548915 podman[253062]: 2025-12-06 10:00:23.020606081 +0000 UTC m=+0.151038967 container attach ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:00:23 np0005548915 python3.9[253149]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 05:00:23 np0005548915 lvm[253297]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:00:23 np0005548915 lvm[253297]: VG ceph_vg0 finished
Dec  6 05:00:23 np0005548915 hungry_joliot[253115]: {}
Dec  6 05:00:23 np0005548915 systemd[1]: libpod-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope: Deactivated successfully.
Dec  6 05:00:23 np0005548915 systemd[1]: libpod-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope: Consumed 1.265s CPU time.
Dec  6 05:00:23 np0005548915 podman[253062]: 2025-12-06 10:00:23.805392189 +0000 UTC m=+0.935824845 container died ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  6 05:00:23 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c-merged.mount: Deactivated successfully.
Dec  6 05:00:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:00:23
Dec  6 05:00:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:00:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:00:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.log', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control']
Dec  6 05:00:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:00:23 np0005548915 podman[253062]: 2025-12-06 10:00:23.85099686 +0000 UTC m=+0.981429516 container remove ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:00:23 np0005548915 systemd[1]: libpod-conmon-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope: Deactivated successfully.
Dec  6 05:00:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:00:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:00:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:00:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:00:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:23 np0005548915 podman[253329]: 2025-12-06 10:00:23.975393182 +0000 UTC m=+0.130871690 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec  6 05:00:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:00:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:00:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:24 np0005548915 python3.9[253413]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:00:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:00:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:24.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:24 np0005548915 python3.9[253590]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765015224.2108967-4153-162476875988314/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 05:00:24 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:24 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:00:25 np0005548915 python3.9[253666]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 05:00:25 np0005548915 systemd[1]: Reloading.
Dec  6 05:00:25 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 05:00:25 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 05:00:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:26 np0005548915 python3.9[253779]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 05:00:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:00:26 np0005548915 systemd[1]: Reloading.
Dec  6 05:00:26 np0005548915 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 05:00:26 np0005548915 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 05:00:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:00:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:26.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:00:26 np0005548915 systemd[1]: Starting nova_compute container...
Dec  6 05:00:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:26 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:26 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:26 np0005548915 podman[253819]: 2025-12-06 10:00:26.782612279 +0000 UTC m=+0.096025251 container init 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 05:00:26 np0005548915 podman[253819]: 2025-12-06 10:00:26.789457276 +0000 UTC m=+0.102870238 container start 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  6 05:00:26 np0005548915 podman[253819]: nova_compute
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + sudo -E kolla_set_configs
Dec  6 05:00:26 np0005548915 systemd[1]: Started nova_compute container.
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Validating config file
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying service configuration files
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Deleting /etc/ceph
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Creating directory /etc/ceph
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Writing out command to execute
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:26 np0005548915 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 05:00:26 np0005548915 nova_compute[253834]: ++ cat /run_command
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + CMD=nova-compute
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + ARGS=
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + sudo kolla_copy_cacerts
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + [[ ! -n '' ]]
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + . kolla_extend_start
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + echo 'Running command: '\''nova-compute'\'''
Dec  6 05:00:26 np0005548915 nova_compute[253834]: Running command: 'nova-compute'
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + umask 0022
Dec  6 05:00:26 np0005548915 nova_compute[253834]: + exec nova-compute
Dec  6 05:00:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:27.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:00:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:27.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:00:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:00:27 np0005548915 python3.9[253998]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 05:00:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 05:00:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:28.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:28.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:28 np0005548915 python3.9[254149]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 05:00:29 np0005548915 nova_compute[253834]: 2025-12-06 10:00:29.251 253838 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 05:00:29 np0005548915 nova_compute[253834]: 2025-12-06 10:00:29.252 253838 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 05:00:29 np0005548915 nova_compute[253834]: 2025-12-06 10:00:29.252 253838 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 05:00:29 np0005548915 nova_compute[253834]: 2025-12-06 10:00:29.252 253838 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  6 05:00:29 np0005548915 ceph-osd[82803]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000044s
Dec  6 05:00:29 np0005548915 nova_compute[253834]: 2025-12-06 10:00:29.632 253838 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.642588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229643090, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1153, "num_deletes": 251, "total_data_size": 2172867, "memory_usage": 2216872, "flush_reason": "Manual Compaction"}
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec  6 05:00:29 np0005548915 nova_compute[253834]: 2025-12-06 10:00:29.653 253838 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:00:29 np0005548915 nova_compute[253834]: 2025-12-06 10:00:29.654 253838 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229658554, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2113460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18950, "largest_seqno": 20101, "table_properties": {"data_size": 2107863, "index_size": 2989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11905, "raw_average_key_size": 19, "raw_value_size": 2096729, "raw_average_value_size": 3506, "num_data_blocks": 132, "num_entries": 598, "num_filter_entries": 598, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015120, "oldest_key_time": 1765015120, "file_creation_time": 1765015229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15604 microseconds, and 5251 cpu microseconds.
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.658616) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2113460 bytes OK
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.658641) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.662378) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.662404) EVENT_LOG_v1 {"time_micros": 1765015229662396, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.662438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2167701, prev total WAL file size 2167701, number of live WAL files 2.
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.663312) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2063KB)], [41(13MB)]
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229663347, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 16100613, "oldest_snapshot_seqno": -1}
Dec  6 05:00:29 np0005548915 podman[254277]: 2025-12-06 10:00:29.722890632 +0000 UTC m=+0.057672965 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5033 keys, 13954463 bytes, temperature: kUnknown
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229810975, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13954463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13919210, "index_size": 21575, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 128256, "raw_average_key_size": 25, "raw_value_size": 13826217, "raw_average_value_size": 2747, "num_data_blocks": 885, "num_entries": 5033, "num_filter_entries": 5033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.811200) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13954463 bytes
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.812838) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.0 rd, 94.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.3 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(14.2) write-amplify(6.6) OK, records in: 5553, records dropped: 520 output_compression: NoCompression
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.812855) EVENT_LOG_v1 {"time_micros": 1765015229812848, "job": 20, "event": "compaction_finished", "compaction_time_micros": 147707, "compaction_time_cpu_micros": 27919, "output_level": 6, "num_output_files": 1, "total_output_size": 13954463, "num_input_records": 5553, "num_output_records": 5033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229813399, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229815802, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.663209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:00:29 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:00:29 np0005548915 python3.9[254316]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 05:00:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.200 253838 INFO nova.virt.driver [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  6 05:00:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.329 253838 INFO nova.compute.provider_config [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.374 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.375 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.375 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.375 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:30.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 WARNING oslo_config.cfg [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  6 05:00:30 np0005548915 nova_compute[253834]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  6 05:00:30 np0005548915 nova_compute[253834]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  6 05:00:30 np0005548915 nova_compute[253834]: and ``live_migration_inbound_addr`` respectively.
Dec  6 05:00:30 np0005548915 nova_compute[253834]: ).  Its value may be silently ignored in the future.#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_secret_uuid        = 5ecd3f74-dade-5fc4-92ce-8950ae424258 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.470 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.470 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.470 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.481 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.482 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.482 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.482 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:30.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.531 253838 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.544 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  6 05:00:30 np0005548915 systemd[1]: Starting libvirt QEMU daemon...
Dec  6 05:00:30 np0005548915 systemd[1]: Started libvirt QEMU daemon.
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.639 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f98b5ada460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.642 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f98b5ada460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.643 253838 INFO nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.662 253838 WARNING nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  6 05:00:30 np0005548915 nova_compute[253834]: 2025-12-06 10:00:30.662 253838 DEBUG nova.virt.libvirt.volume.mount [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  6 05:00:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 05:00:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  6 05:00:31 np0005548915 python3.9[254551]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  6 05:00:31 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.468 253838 INFO nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host capabilities <capabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <host>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <uuid>cc5c2b35-ce1b-4acf-9906-7bdc7897f14e</uuid>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <arch>x86_64</arch>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model>EPYC-Rome-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <vendor>AMD</vendor>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <microcode version='16777317'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <signature family='23' model='49' stepping='0'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='x2apic'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='tsc-deadline'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='osxsave'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='hypervisor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='tsc_adjust'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='spec-ctrl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='stibp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='arch-capabilities'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='cmp_legacy'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='topoext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='virt-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='lbrv'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='tsc-scale'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='vmcb-clean'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='pause-filter'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='pfthreshold'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='svme-addr-chk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='rdctl-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='skip-l1dfl-vmentry'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='mds-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature name='pschange-mc-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <pages unit='KiB' size='4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <pages unit='KiB' size='2048'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <pages unit='KiB' size='1048576'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <power_management>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <suspend_mem/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </power_management>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <iommu support='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <migration_features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <live/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <uri_transports>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <uri_transport>tcp</uri_transport>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <uri_transport>rdma</uri_transport>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </uri_transports>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </migration_features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <topology>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <cells num='1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <cell id='0'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          <memory unit='KiB'>7864320</memory>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          <pages unit='KiB' size='2048'>0</pages>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          <distances>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <sibling id='0' value='10'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          </distances>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          <cpus num='8'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:          </cpus>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        </cell>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </cells>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </topology>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <cache>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </cache>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <secmodel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model>selinux</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <doi>0</doi>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </secmodel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <secmodel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model>dac</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <doi>0</doi>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </secmodel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </host>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <guest>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <os_type>hvm</os_type>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <arch name='i686'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <wordsize>32</wordsize>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <domain type='qemu'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <domain type='kvm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </arch>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <pae/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <nonpae/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <acpi default='on' toggle='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <apic default='on' toggle='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <cpuselection/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <deviceboot/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <disksnapshot default='on' toggle='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <externalSnapshot/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </guest>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <guest>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <os_type>hvm</os_type>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <arch name='x86_64'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <wordsize>64</wordsize>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <domain type='qemu'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <domain type='kvm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </arch>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <acpi default='on' toggle='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <apic default='on' toggle='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <cpuselection/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <deviceboot/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <disksnapshot default='on' toggle='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <externalSnapshot/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </guest>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 
Dec  6 05:00:31 np0005548915 nova_compute[253834]: </capabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: #033[00m
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.479 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.505 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  6 05:00:31 np0005548915 nova_compute[253834]: <domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <domain>kvm</domain>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <arch>i686</arch>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <vcpu max='240'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <iothreads supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <os supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='firmware'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <loader supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>rom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pflash</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='readonly'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>yes</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='secure'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </loader>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </os>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-passthrough' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='hostPassthroughMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='maximum' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='maximumMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-model' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <vendor>AMD</vendor>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='x2apic'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='hypervisor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='stibp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='overflow-recov'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='succor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lbrv'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-scale'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='flushbyasid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pause-filter'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pfthreshold'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='disable' name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='custom' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Dhyana-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-128'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-256'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-512'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v6'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v7'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <memoryBacking supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='sourceType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>anonymous</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>memfd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </memoryBacking>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <disk supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='diskDevice'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>disk</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cdrom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>floppy</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>lun</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ide</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>fdc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>sata</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </disk>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <graphics supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vnc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egl-headless</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </graphics>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <video supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='modelType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vga</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cirrus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>none</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>bochs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ramfb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </video>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hostdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='mode'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>subsystem</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='startupPolicy'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>mandatory</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>requisite</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>optional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='subsysType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pci</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='capsType'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='pciBackend'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hostdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <rng supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>random</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </rng>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <filesystem supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='driverType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>path</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>handle</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtiofs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </filesystem>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <tpm supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-tis</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-crb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emulator</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>external</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendVersion'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>2.0</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </tpm>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <redirdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </redirdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <channel supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </channel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <crypto supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </crypto>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <interface supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>passt</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </interface>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <panic supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>isa</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>hyperv</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </panic>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <console supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>null</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dev</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pipe</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stdio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>udp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tcp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu-vdagent</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </console>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <gic supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <vmcoreinfo supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <genid supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backingStoreInput supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backup supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <async-teardown supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <ps2 supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sev supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sgx supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hyperv supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='features'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>relaxed</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vapic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>spinlocks</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vpindex</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>runtime</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>synic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stimer</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reset</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vendor_id</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>frequencies</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reenlightenment</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tlbflush</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ipi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>avic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emsr_bitmap</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>xmm_input</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <spinlocks>4095</spinlocks>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <stimer_direct>on</stimer_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hyperv>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <launchSecurity supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='sectype'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tdx</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </launchSecurity>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: </domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.512 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  6 05:00:31 np0005548915 nova_compute[253834]: <domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <domain>kvm</domain>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <arch>i686</arch>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <vcpu max='4096'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <iothreads supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <os supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='firmware'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <loader supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>rom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pflash</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='readonly'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>yes</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='secure'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </loader>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </os>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-passthrough' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='hostPassthroughMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='maximum' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='maximumMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-model' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <vendor>AMD</vendor>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='x2apic'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='hypervisor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='stibp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='overflow-recov'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='succor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lbrv'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-scale'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='flushbyasid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pause-filter'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pfthreshold'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='disable' name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='custom' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Dhyana-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-128'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-256'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-512'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v6'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v7'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <memoryBacking supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='sourceType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>anonymous</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>memfd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </memoryBacking>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <disk supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='diskDevice'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>disk</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cdrom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>floppy</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>lun</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>fdc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>sata</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </disk>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <graphics supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vnc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egl-headless</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </graphics>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <video supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='modelType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vga</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cirrus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>none</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>bochs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ramfb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </video>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hostdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='mode'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>subsystem</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='startupPolicy'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>mandatory</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>requisite</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>optional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='subsysType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pci</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='capsType'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='pciBackend'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hostdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <rng supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>random</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </rng>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <filesystem supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='driverType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>path</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>handle</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtiofs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </filesystem>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <tpm supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-tis</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-crb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emulator</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>external</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendVersion'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>2.0</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </tpm>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <redirdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </redirdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <channel supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </channel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <crypto supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </crypto>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <interface supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>passt</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </interface>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <panic supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>isa</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>hyperv</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </panic>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <console supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>null</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dev</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pipe</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stdio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>udp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tcp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu-vdagent</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </console>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <gic supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <vmcoreinfo supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <genid supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backingStoreInput supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backup supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <async-teardown supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <ps2 supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sev supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sgx supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hyperv supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='features'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>relaxed</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vapic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>spinlocks</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vpindex</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>runtime</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>synic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stimer</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reset</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vendor_id</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>frequencies</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reenlightenment</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tlbflush</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ipi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>avic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emsr_bitmap</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>xmm_input</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <spinlocks>4095</spinlocks>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <stimer_direct>on</stimer_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hyperv>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <launchSecurity supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='sectype'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tdx</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </launchSecurity>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: </domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.549 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  6 05:00:31 np0005548915 nova_compute[253834]: <domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <domain>kvm</domain>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <arch>x86_64</arch>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <vcpu max='240'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <iothreads supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <os supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='firmware'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <loader supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>rom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pflash</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='readonly'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>yes</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='secure'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </loader>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </os>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-passthrough' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='hostPassthroughMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='maximum' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='maximumMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-model' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <vendor>AMD</vendor>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='x2apic'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='hypervisor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='stibp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='overflow-recov'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='succor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lbrv'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-scale'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='flushbyasid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pause-filter'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pfthreshold'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='disable' name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='custom' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Dhyana-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-128'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-256'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-512'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v6'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v7'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <memoryBacking supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='sourceType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>anonymous</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>memfd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </memoryBacking>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <disk supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='diskDevice'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>disk</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cdrom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>floppy</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>lun</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ide</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>fdc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>sata</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </disk>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <graphics supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vnc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egl-headless</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </graphics>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <video supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='modelType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vga</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cirrus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>none</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>bochs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ramfb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </video>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hostdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='mode'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>subsystem</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='startupPolicy'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>mandatory</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>requisite</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>optional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='subsysType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pci</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='capsType'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='pciBackend'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hostdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <rng supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>random</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </rng>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <filesystem supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='driverType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>path</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>handle</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtiofs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </filesystem>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <tpm supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-tis</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-crb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emulator</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>external</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendVersion'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>2.0</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </tpm>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <redirdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </redirdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <channel supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </channel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <crypto supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </crypto>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <interface supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>passt</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </interface>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <panic supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>isa</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>hyperv</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </panic>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <console supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>null</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dev</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pipe</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stdio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>udp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tcp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu-vdagent</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </console>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <gic supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <vmcoreinfo supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <genid supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backingStoreInput supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backup supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <async-teardown supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <ps2 supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sev supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sgx supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hyperv supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='features'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>relaxed</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vapic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>spinlocks</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vpindex</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>runtime</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>synic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stimer</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reset</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vendor_id</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>frequencies</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reenlightenment</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tlbflush</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ipi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>avic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emsr_bitmap</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>xmm_input</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <spinlocks>4095</spinlocks>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <stimer_direct>on</stimer_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hyperv>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <launchSecurity supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='sectype'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tdx</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </launchSecurity>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: </domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.611 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  6 05:00:31 np0005548915 nova_compute[253834]: <domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <domain>kvm</domain>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <arch>x86_64</arch>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <vcpu max='4096'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <iothreads supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <os supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='firmware'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>efi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <loader supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>rom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pflash</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='readonly'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>yes</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='secure'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>yes</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>no</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </loader>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </os>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-passthrough' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='hostPassthroughMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='maximum' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='maximumMigratable'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>on</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>off</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='host-model' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <vendor>AMD</vendor>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='x2apic'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='hypervisor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='stibp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='overflow-recov'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='succor'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lbrv'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='tsc-scale'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='flushbyasid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pause-filter'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='pfthreshold'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <feature policy='disable' name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <mode name='custom' supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Broadwell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Cooperlake-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Denverton-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Dhyana-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='auto-ibrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Milan-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amd-psfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='stibp-always-on'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-Rome-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='EPYC-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='GraniteRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-128'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-256'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx10-512'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='prefetchiti'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Haswell-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v6'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Icelake-Server-v7'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='IvyBridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='KnightsMill-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512er'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512pf'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G4-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Opteron_G5-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fma4'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tbm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xop'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SapphireRapids-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='amx-tile'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-bf16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-fp16'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bitalg'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrc'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fzrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='la57'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='taa-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xfd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='SierraForest-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ifma'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cmpccxadd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fbsdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='fsrs'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ibrs-all'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mcdt-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pbrsb-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='psdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='serialize'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vaes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Client-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='hle'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='rtm'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Skylake-Server-v5'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512bw'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512cd'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512dq'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512f'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='avx512vl'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='invpcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pcid'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='pku'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='mpx'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v2'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v3'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='core-capability'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='split-lock-detect'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='Snowridge-v4'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='cldemote'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='erms'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='gfni'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdir64b'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='movdiri'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='xsaves'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='athlon-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='core2duo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='coreduo-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='n270-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='ss'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <blockers model='phenom-v1'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnow'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <feature name='3dnowext'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </blockers>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </mode>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </cpu>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <memoryBacking supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <enum name='sourceType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>anonymous</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <value>memfd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </memoryBacking>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <disk supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='diskDevice'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>disk</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cdrom</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>floppy</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>lun</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>fdc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>sata</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </disk>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <graphics supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vnc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egl-headless</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </graphics>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <video supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='modelType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vga</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>cirrus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>none</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>bochs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ramfb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </video>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hostdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='mode'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>subsystem</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='startupPolicy'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>mandatory</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>requisite</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>optional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='subsysType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pci</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>scsi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='capsType'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='pciBackend'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hostdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <rng supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtio-non-transitional</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>random</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>egd</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </rng>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <filesystem supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='driverType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>path</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>handle</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>virtiofs</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </filesystem>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <tpm supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-tis</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tpm-crb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emulator</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>external</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendVersion'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>2.0</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </tpm>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <redirdev supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='bus'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>usb</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </redirdev>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <channel supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </channel>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <crypto supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendModel'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>builtin</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </crypto>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <interface supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='backendType'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>default</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>passt</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </interface>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <panic supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='model'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>isa</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>hyperv</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </panic>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <console supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='type'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>null</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vc</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pty</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dev</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>file</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>pipe</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stdio</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>udp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tcp</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>unix</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>qemu-vdagent</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>dbus</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </console>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </devices>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  <features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <gic supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <vmcoreinfo supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <genid supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backingStoreInput supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <backup supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <async-teardown supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <ps2 supported='yes'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sev supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <sgx supported='no'/>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <hyperv supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='features'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>relaxed</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vapic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>spinlocks</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vpindex</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>runtime</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>synic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>stimer</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reset</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>vendor_id</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>frequencies</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>reenlightenment</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tlbflush</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>ipi</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>avic</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>emsr_bitmap</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>xmm_input</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <spinlocks>4095</spinlocks>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <stimer_direct>on</stimer_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </defaults>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </hyperv>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    <launchSecurity supported='yes'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      <enum name='sectype'>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:        <value>tdx</value>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:      </enum>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:    </launchSecurity>
Dec  6 05:00:31 np0005548915 nova_compute[253834]:  </features>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: </domainCapabilities>
Dec  6 05:00:31 np0005548915 nova_compute[253834]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
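The `<hyperv>` block in the domainCapabilities dump above lists the enlightenments this libvirt build can expose to guests. A minimal sketch of pulling that list out of such XML — using a hard-coded excerpt of the dump rather than a live `virConnect.getDomainCapabilities()` call, so the values shown here are only the subset copied into the snippet:

```python
import xml.etree.ElementTree as ET

# Excerpt of the <features> section from the domainCapabilities dump above.
DOMCAPS_EXCERPT = """
<domainCapabilities>
  <features>
    <sev supported='no'/>
    <hyperv supported='yes'>
      <enum name='features'>
        <value>relaxed</value>
        <value>vapic</value>
        <value>stimer</value>
      </enum>
    </hyperv>
  </features>
</domainCapabilities>
"""

def hyperv_features(domcaps_xml):
    """Return the Hyper-V enlightenments libvirt reports as available."""
    root = ET.fromstring(domcaps_xml)
    hv = root.find('./features/hyperv')
    if hv is None or hv.get('supported') != 'yes':
        return []
    return [v.text for v in hv.findall("./enum[@name='features']/value")]

print(hyperv_features(DOMCAPS_EXCERPT))  # → ['relaxed', 'vapic', 'stimer']
```

On a live host the same XML comes back from `conn.getDomainCapabilities()` in libvirt-python, which is what nova's `_get_domain_capabilities` wraps.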
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.677 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.678 253838 INFO nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Secure Boot support detected
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.679 253838 INFO nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.680 253838 INFO nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.688 253838 DEBUG nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.719 253838 INFO nova.virt.node [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Determined node identity 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from /var/lib/nova/compute_id
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.745 253838 WARNING nova.compute.manager [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Compute nodes ['06a9c7d1-c74c-47ea-9e97-16acfab6aa88'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.787 253838 INFO nova.compute.manager [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.826 253838 WARNING nova.compute.manager [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.827 253838 DEBUG oslo_concurrency.lockutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.827 253838 DEBUG oslo_concurrency.lockutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.828 253838 DEBUG oslo_concurrency.lockutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.828 253838 DEBUG nova.compute.resource_tracker [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  6 05:00:31 np0005548915 nova_compute[253834]: 2025-12-06 10:00:31.829 253838 DEBUG oslo_concurrency.processutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
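The resource audit above shells out to `ceph df --format=json`, whose JSON carries a top-level `stats` object with cluster-wide byte totals. A sketch of summarizing such output — the sample document below is hypothetical, sized to roughly match the 60 GiB / 153 MiB figures in the pgmap lines further down, and trimmed to only the fields this sketch reads:

```python
import json

# Hypothetical, trimmed `ceph df --format=json` output (illustrative values).
SAMPLE = json.dumps({
    "stats": {
        "total_bytes": 64424509440,     # 60 GiB
        "total_used_bytes": 160432128,  # 153 MiB
        "total_avail_bytes": 64264077312,
    }
})

def cluster_usage(raw):
    """Return (used_bytes, total_bytes) from a `ceph df` JSON document."""
    stats = json.loads(raw)["stats"]
    return stats["total_used_bytes"], stats["total_bytes"]

used, total = cluster_usage(SAMPLE)
print(f"{used / total:.4%} used")
```

In the real driver the command output would be fed in instead of `SAMPLE`, e.g. via `subprocess.run([...], capture_output=True)`.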
Dec  6 05:00:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:32 np0005548915 python3.9[254739]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 05:00:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:32 np0005548915 systemd[1]: Stopping nova_compute container...
Dec  6 05:00:32 np0005548915 nova_compute[253834]: 2025-12-06 10:00:32.159 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 05:00:32 np0005548915 nova_compute[253834]: 2025-12-06 10:00:32.160 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 05:00:32 np0005548915 nova_compute[253834]: 2025-12-06 10:00:32.160 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 05:00:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:00:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:32.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
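The radosgw `beast:` access lines above have a stable shape (request pointer, client, user, timestamp, request line, status, size, latency). A regex sketch for pulling the useful fields out of one of them, using the first access line from this log with the syslog prefix stripped:

```python
import re

# One beast access line, copied from the log above (syslog prefix stripped).
LINE = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
        '[06/Dec/2025:10:00:32.456 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')

BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s'
)

m = BEAST_RE.search(LINE)
print(m.group('ip'), m.group('status'), m.group('latency'))
# → 192.168.122.100 200 0.000000000
```

The pattern is an assumption fitted to the lines shown here, not an official radosgw log grammar; other radosgw versions may order the trailing fields differently.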
Dec  6 05:00:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:00:32 np0005548915 virtqemud[254445]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  6 05:00:32 np0005548915 virtqemud[254445]: hostname: compute-0
Dec  6 05:00:32 np0005548915 virtqemud[254445]: End of file while reading data: Input/output error
Dec  6 05:00:32 np0005548915 systemd[1]: libpod-61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4.scope: Deactivated successfully.
Dec  6 05:00:32 np0005548915 systemd[1]: libpod-61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4.scope: Consumed 3.784s CPU time.
Dec  6 05:00:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:32 np0005548915 podman[254763]: 2025-12-06 10:00:32.705391524 +0000 UTC m=+0.601242430 container died 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3)
Dec  6 05:00:32 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1-merged.mount: Deactivated successfully.
Dec  6 05:00:32 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4-userdata-shm.mount: Deactivated successfully.
Dec  6 05:00:33 np0005548915 podman[254763]: 2025-12-06 10:00:33.197688868 +0000 UTC m=+1.093539774 container cleanup 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 05:00:33 np0005548915 podman[254763]: nova_compute
Dec  6 05:00:33 np0005548915 podman[254791]: nova_compute
Dec  6 05:00:33 np0005548915 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  6 05:00:33 np0005548915 systemd[1]: Stopped nova_compute container.
Dec  6 05:00:33 np0005548915 systemd[1]: Starting nova_compute container...
Dec  6 05:00:33 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:33 np0005548915 podman[254804]: 2025-12-06 10:00:33.408112329 +0000 UTC m=+0.107424395 container init 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute)
Dec  6 05:00:33 np0005548915 podman[254804]: 2025-12-06 10:00:33.417491141 +0000 UTC m=+0.116803187 container start 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + sudo -E kolla_set_configs
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Validating config file
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying service configuration files
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /etc/ceph
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Creating directory /etc/ceph
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Writing out command to execute
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:33 np0005548915 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 05:00:33 np0005548915 nova_compute[254819]: ++ cat /run_command
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + CMD=nova-compute
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + ARGS=
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + sudo kolla_copy_cacerts
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + [[ ! -n '' ]]
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + . kolla_extend_start
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + echo 'Running command: '\''nova-compute'\'''
Dec  6 05:00:33 np0005548915 nova_compute[254819]: Running command: 'nova-compute'
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + umask 0022
Dec  6 05:00:33 np0005548915 nova_compute[254819]: + exec nova-compute
Dec  6 05:00:33 np0005548915 podman[254804]: nova_compute
Dec  6 05:00:33 np0005548915 systemd[1]: Started nova_compute container.
Dec  6 05:00:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:00:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:34.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:00:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:00:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:35 np0005548915 python3.9[254984]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  6 05:00:35 np0005548915 systemd[1]: Started libpod-conmon-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93.scope.
Dec  6 05:00:35 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:00:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  6 05:00:35 np0005548915 podman[255010]: 2025-12-06 10:00:35.271307298 +0000 UTC m=+0.123952367 container init 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 05:00:35 np0005548915 podman[255010]: 2025-12-06 10:00:35.279774395 +0000 UTC m=+0.132419454 container start 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, io.buildah.version=1.41.3)
Dec  6 05:00:35 np0005548915 python3.9[254984]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Applying nova statedir ownership
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  6 05:00:35 np0005548915 nova_compute_init[255032]: INFO:nova_statedir:Nova statedir ownership complete
Dec  6 05:00:35 np0005548915 systemd[1]: libpod-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93.scope: Deactivated successfully.
Dec  6 05:00:35 np0005548915 podman[255046]: 2025-12-06 10:00:35.378545209 +0000 UTC m=+0.029824620 container died 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 05:00:35 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93-userdata-shm.mount: Deactivated successfully.
Dec  6 05:00:35 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28-merged.mount: Deactivated successfully.
Dec  6 05:00:35 np0005548915 podman[255046]: 2025-12-06 10:00:35.41599977 +0000 UTC m=+0.067279171 container cleanup 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec  6 05:00:35 np0005548915 systemd[1]: libpod-conmon-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93.scope: Deactivated successfully.
Dec  6 05:00:35 np0005548915 nova_compute[254819]: 2025-12-06 10:00:35.542 254824 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 05:00:35 np0005548915 nova_compute[254819]: 2025-12-06 10:00:35.543 254824 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 05:00:35 np0005548915 nova_compute[254819]: 2025-12-06 10:00:35.544 254824 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 05:00:35 np0005548915 nova_compute[254819]: 2025-12-06 10:00:35.544 254824 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  6 05:00:35 np0005548915 nova_compute[254819]: 2025-12-06 10:00:35.719 254824 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:00:35 np0005548915 nova_compute[254819]: 2025-12-06 10:00:35.735 254824 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:00:35 np0005548915 nova_compute[254819]: 2025-12-06 10:00:35.736 254824 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  6 05:00:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:00:36 np0005548915 systemd[1]: session-54.scope: Deactivated successfully.
Dec  6 05:00:36 np0005548915 systemd[1]: session-54.scope: Consumed 2min 34.573s CPU time.
Dec  6 05:00:36 np0005548915 systemd-logind[795]: Session 54 logged out. Waiting for processes to exit.
Dec  6 05:00:36 np0005548915 systemd-logind[795]: Removed session 54.
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.193 254824 INFO nova.virt.driver [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  6 05:00:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.307 254824 INFO nova.compute.provider_config [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.316 254824 DEBUG oslo_concurrency.lockutils [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.316 254824 DEBUG oslo_concurrency.lockutils [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.316 254824 DEBUG oslo_concurrency.lockutils [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 WARNING oslo_config.cfg [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  6 05:00:36 np0005548915 nova_compute[254819]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  6 05:00:36 np0005548915 nova_compute[254819]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  6 05:00:36 np0005548915 nova_compute[254819]: and ``live_migration_inbound_addr`` respectively.
Dec  6 05:00:36 np0005548915 nova_compute[254819]: ).  Its value may be silently ignored in the future.#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_secret_uuid        = 5ecd3f74-dade-5fc4-92ce-8950ae424258 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:36.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.466 254824 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.486 254824 INFO nova.virt.node [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Determined node identity 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from /var/lib/nova/compute_id#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.487 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.488 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.488 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.488 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.503 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f223c536760> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.505 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f223c536760> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.506 254824 INFO nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.513 254824 INFO nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host capabilities <capabilities>
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <host>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <uuid>cc5c2b35-ce1b-4acf-9906-7bdc7897f14e</uuid>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <cpu>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <arch>x86_64</arch>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model>EPYC-Rome-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <vendor>AMD</vendor>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <microcode version='16777317'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <signature family='23' model='49' stepping='0'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='x2apic'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='tsc-deadline'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='osxsave'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='hypervisor'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='tsc_adjust'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='spec-ctrl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='stibp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='arch-capabilities'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='cmp_legacy'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='topoext'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='virt-ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='lbrv'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='tsc-scale'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='vmcb-clean'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='pause-filter'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='pfthreshold'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='svme-addr-chk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='rdctl-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='skip-l1dfl-vmentry'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='mds-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature name='pschange-mc-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <pages unit='KiB' size='4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <pages unit='KiB' size='2048'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <pages unit='KiB' size='1048576'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </cpu>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <power_management>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <suspend_mem/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </power_management>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <iommu support='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <migration_features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <live/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <uri_transports>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <uri_transport>tcp</uri_transport>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <uri_transport>rdma</uri_transport>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </uri_transports>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </migration_features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <topology>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <cells num='1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <cell id='0'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          <memory unit='KiB'>7864320</memory>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          <pages unit='KiB' size='2048'>0</pages>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          <distances>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <sibling id='0' value='10'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          </distances>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          <cpus num='8'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:          </cpus>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        </cell>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </cells>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </topology>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <cache>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </cache>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <secmodel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model>selinux</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <doi>0</doi>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </secmodel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <secmodel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model>dac</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <doi>0</doi>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </secmodel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </host>
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <guest>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <os_type>hvm</os_type>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <arch name='i686'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <wordsize>32</wordsize>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <domain type='qemu'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <domain type='kvm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </arch>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <pae/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <nonpae/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <acpi default='on' toggle='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <apic default='on' toggle='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <cpuselection/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <deviceboot/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <disksnapshot default='on' toggle='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <externalSnapshot/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </guest>
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <guest>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <os_type>hvm</os_type>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <arch name='x86_64'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <wordsize>64</wordsize>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <domain type='qemu'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <domain type='kvm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </arch>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <acpi default='on' toggle='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <apic default='on' toggle='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <cpuselection/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <deviceboot/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <disksnapshot default='on' toggle='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <externalSnapshot/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </guest>
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 
Dec  6 05:00:36 np0005548915 nova_compute[254819]: </capabilities>
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 
Dec  6 05:00:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:00:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:00:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.518 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.522 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  6 05:00:36 np0005548915 nova_compute[254819]: <domainCapabilities>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <domain>kvm</domain>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <arch>i686</arch>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <vcpu max='240'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <iothreads supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <os supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <enum name='firmware'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <loader supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='type'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>rom</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>pflash</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='readonly'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>yes</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>no</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='secure'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>no</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </loader>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <cpu>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='host-passthrough' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='hostPassthroughMigratable'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>on</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>off</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </mode>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='maximum' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='maximumMigratable'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>on</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>off</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </mode>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='host-model' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <vendor>AMD</vendor>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='x2apic'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='hypervisor'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='stibp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='overflow-recov'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='succor'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='ibrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='lbrv'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='tsc-scale'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='flushbyasid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='pause-filter'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='pfthreshold'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='disable' name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </mode>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='custom' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cooperlake'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cooperlake-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cooperlake-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mpx'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mpx'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Dhyana-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Genoa'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amd-psfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='auto-ibrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='stibp-always-on'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amd-psfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='auto-ibrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='stibp-always-on'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Milan'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Milan-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Milan-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amd-psfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='stibp-always-on'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='GraniteRapids'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='prefetchiti'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='GraniteRapids-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='prefetchiti'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='GraniteRapids-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10-128'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10-256'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10-512'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='prefetchiti'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v5'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v6'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v7'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='KnightsMill'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512er'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512pf'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='KnightsMill-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512er'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512pf'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G4-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G5'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tbm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G5-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tbm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SierraForest'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cmpccxadd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SierraForest-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-ne-convert'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cmpccxadd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Client'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Client-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Client-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Client-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Client-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Skylake-Server-v5'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Snowridge'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='core-capability'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mpx'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='split-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Snowridge-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='core-capability'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mpx'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='split-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Snowridge-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='core-capability'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='split-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Snowridge-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='core-capability'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='split-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Snowridge-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='athlon'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnow'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnowext'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='athlon-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnow'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnowext'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='core2duo'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='core2duo-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='coreduo'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='coreduo-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='n270'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='n270-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='phenom'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnow'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnowext'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='phenom-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnow'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='3dnowext'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </mode>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <memoryBacking supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <enum name='sourceType'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <value>file</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <value>anonymous</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <value>memfd</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </memoryBacking>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <disk supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='diskDevice'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>disk</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>cdrom</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>floppy</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>lun</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='bus'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>ide</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>fdc</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>scsi</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>usb</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>sata</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='model'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio-transitional</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio-non-transitional</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <graphics supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='type'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>vnc</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>egl-headless</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>dbus</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <video supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='modelType'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>vga</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>cirrus</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>none</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>bochs</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>ramfb</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <hostdev supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='mode'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>subsystem</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='startupPolicy'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>default</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>mandatory</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>requisite</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>optional</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='subsysType'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>usb</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>pci</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>scsi</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='capsType'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='pciBackend'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </hostdev>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <rng supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='model'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio-transitional</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtio-non-transitional</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='backendModel'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>random</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>egd</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>builtin</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <filesystem supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='driverType'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>path</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>handle</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>virtiofs</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </filesystem>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <tpm supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='model'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>tpm-tis</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>tpm-crb</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='backendModel'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>emulator</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>external</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='backendVersion'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>2.0</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </tpm>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <redirdev supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='bus'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>usb</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </redirdev>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <channel supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='type'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>pty</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>unix</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </channel>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <crypto supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='model'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='type'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>qemu</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='backendModel'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>builtin</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </crypto>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <interface supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='backendType'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>default</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>passt</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <panic supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='model'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>isa</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>hyperv</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </panic>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <console supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='type'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>null</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>vc</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>pty</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>dev</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>file</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>pipe</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>stdio</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>udp</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>tcp</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>unix</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>qemu-vdagent</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>dbus</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <gic supported='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <vmcoreinfo supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <genid supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <backingStoreInput supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <backup supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <async-teardown supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <ps2 supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <sev supported='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <sgx supported='no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <hyperv supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='features'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>relaxed</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>vapic</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>spinlocks</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>vpindex</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>runtime</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>synic</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>stimer</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>reset</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>vendor_id</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>frequencies</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>reenlightenment</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>tlbflush</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>ipi</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>avic</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>emsr_bitmap</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>xmm_input</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <defaults>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <spinlocks>4095</spinlocks>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <stimer_direct>on</stimer_direct>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </defaults>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </hyperv>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <launchSecurity supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='sectype'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>tdx</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </launchSecurity>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:00:36 np0005548915 nova_compute[254819]: </domainCapabilities>
Dec  6 05:00:36 np0005548915 nova_compute[254819]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.526 254824 DEBUG nova.virt.libvirt.volume.mount [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  6 05:00:36 np0005548915 nova_compute[254819]: 2025-12-06 10:00:36.529 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  6 05:00:36 np0005548915 nova_compute[254819]: <domainCapabilities>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <domain>kvm</domain>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <arch>i686</arch>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <vcpu max='4096'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <iothreads supported='yes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <os supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <enum name='firmware'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <loader supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='type'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>rom</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>pflash</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='readonly'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>yes</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>no</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='secure'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>no</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </loader>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:  <cpu>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='host-passthrough' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='hostPassthroughMigratable'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>on</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>off</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </mode>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='maximum' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <enum name='maximumMigratable'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>on</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <value>off</value>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </enum>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </mode>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='host-model' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <vendor>AMD</vendor>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='x2apic'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='hypervisor'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='stibp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='overflow-recov'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='succor'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='ibrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='lbrv'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='tsc-scale'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='flushbyasid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='pause-filter'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='pfthreshold'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <feature policy='disable' name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    </mode>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:    <mode name='custom' supported='yes'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Broadwell-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cooperlake'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cooperlake-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Cooperlake-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mpx'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mpx'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Denverton-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Dhyana-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Genoa'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amd-psfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='auto-ibrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='stibp-always-on'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amd-psfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='auto-ibrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='stibp-always-on'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Milan'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Milan-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Milan-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amd-psfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='no-nested-data-bp'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='null-sel-clr-base'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='stibp-always-on'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-Rome-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='EPYC-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='GraniteRapids'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='prefetchiti'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='GraniteRapids-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='prefetchiti'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='GraniteRapids-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10-128'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10-256'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx10-512'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='cldemote'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='mcdt-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdir64b'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='movdiri'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pbrsb-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='prefetchiti'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Haswell-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v5'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v6'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Icelake-Server-v7'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge-IBRS'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='IvyBridge-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='KnightsMill'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512er'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512pf'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='KnightsMill-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4fmaps'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-4vnniw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512er'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512pf'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ss'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G4'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G4-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G5'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tbm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='Opteron_G5-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fma4'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tbm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xop'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids-v1'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids-v2'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512ifma'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vbmi2'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vl'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='bus-lock-detect'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='erms'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fbsdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrc'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fsrs'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='fzrm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='gfni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='hle'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='ibrs-all'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='invpcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='la57'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pcid'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='pku'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='psdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='rtm'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='sbdr-ssdp-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='serialize'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='taa-no'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='tsx-ldtrk'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vaes'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='vpclmulqdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xfd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='xsaves'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      </blockers>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:      <blockers model='SapphireRapids-v3'>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-int8'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='amx-tile'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx-vnni'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-bf16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-fp16'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512-vpopcntdq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bitalg'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512bw'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512cd'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512dq'/>
Dec  6 05:00:36 np0005548915 nova_compute[254819]:        <feature name='avx512f'/>
Dec  6 05:01:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:30 np0005548915 rsyslogd[1004]: imjournal: 4443 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  6 05:01:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:01:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 05:01:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 05:01:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:01:30 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:01:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:01:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:01:31 np0005548915 podman[256128]: 2025-12-06 10:01:31.457551152 +0000 UTC m=+0.083681930 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 05:01:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:01:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:01:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:01:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:32.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:01:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:01:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:01:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:34.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:01:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.751 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.752 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.752 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:01:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100135 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.990 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.990 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.991 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.991 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:01:35 np0005548915 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:01:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.131 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.131 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.132 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.132 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.133 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:01:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:01:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:01:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682509999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:01:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:01:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:36.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.593 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:01:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.755 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.756 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4905MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.757 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.757 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.880 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.881 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:01:36 np0005548915 nova_compute[254819]: 2025-12-06 10:01:36.958 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:01:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:37.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:01:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:01:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251141983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:01:37 np0005548915 nova_compute[254819]: 2025-12-06 10:01:37.441 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:01:37 np0005548915 nova_compute[254819]: 2025-12-06 10:01:37.446 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:01:37 np0005548915 nova_compute[254819]: 2025-12-06 10:01:37.476 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:01:37 np0005548915 nova_compute[254819]: 2025-12-06 10:01:37.477 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:01:37 np0005548915 nova_compute[254819]: 2025-12-06 10:01:37.477 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:01:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:01:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 05:01:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:38.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:01:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:01:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:01:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:01:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:01:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:40.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:01:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:01:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:01:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:42.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:01:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:01:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:01:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240048a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:01:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:46.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:47.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:01:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:01:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec  6 05:01:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:48.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:01:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:01:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240048c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 05:01:50 np0005548915 podman[256236]: 2025-12-06 10:01:50.437736281 +0000 UTC m=+0.062666827 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  6 05:01:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:50.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:01:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:50.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:01:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:01:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:01:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:01:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240048e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  6 05:01:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:01:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:52.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200026b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:01:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:01:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:01:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:01:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:01:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:01:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:01:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:01:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004900 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:01:54.234 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:01:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:01:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:01:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:01:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:01:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 05:01:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:54.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24542 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  6 05:01:55 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:01:55 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:01:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24628 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  6 05:01:55 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:01:55 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:01:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24628 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec  6 05:01:55 np0005548915 podman[256262]: 2025-12-06 10:01:55.448393977 +0000 UTC m=+0.075582414 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  6 05:01:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200026d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:01:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:56.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004920 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:01:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:01:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100157 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:01:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:01:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:01:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:58.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:01:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:01:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:01:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004940 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec  6 05:02:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:00.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:02:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:02:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004960 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  6 05:02:02 np0005548915 podman[256296]: 2025-12-06 10:02:02.437339602 +0000 UTC m=+0.063284735 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:02:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 426 B/s wr, 131 op/s
Dec  6 05:02:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:04.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:04.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec  6 05:02:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:06.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:07.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:02:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec  6 05:02:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:08.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:02:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:02:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec  6 05:02:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 05:02:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 05:02:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec  6 05:02:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:12.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:12.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:13 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24637 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  6 05:02:13 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:02:13 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:02:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec  6 05:02:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3432749316' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  6 05:02:13 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  6 05:02:13 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:02:13 np0005548915 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  6 05:02:13 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec  6 05:02:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec  6 05:02:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:14.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:14.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:16.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:17.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:02:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:02:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:20.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:20.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 05:02:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 05:02:21 np0005548915 podman[256361]: 2025-12-06 10:02:21.444157394 +0000 UTC m=+0.076236991 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:02:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:22.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:22.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:02:23
Dec  6 05:02:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:02:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:02:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms']
Dec  6 05:02:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:02:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:02:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:02:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:02:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:02:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:02:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:02:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:02:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:24.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:26 np0005548915 podman[256386]: 2025-12-06 10:02:26.469530593 +0000 UTC m=+0.099908134 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 05:02:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:26.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:27.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:02:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:02:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:28.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:28.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:30.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:02:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:02:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:02:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:02:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:02:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:02:31 np0005548915 podman[256615]: 2025-12-06 10:02:31.412732723 +0000 UTC m=+0.047960605 container create a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 05:02:31 np0005548915 systemd[1]: Started libpod-conmon-a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504.scope.
Dec  6 05:02:31 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:02:31 np0005548915 podman[256615]: 2025-12-06 10:02:31.395289126 +0000 UTC m=+0.030517028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:02:31 np0005548915 podman[256615]: 2025-12-06 10:02:31.493743681 +0000 UTC m=+0.128971583 container init a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:02:31 np0005548915 podman[256615]: 2025-12-06 10:02:31.500969003 +0000 UTC m=+0.136196885 container start a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:02:31 np0005548915 podman[256615]: 2025-12-06 10:02:31.50415993 +0000 UTC m=+0.139387812 container attach a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:02:31 np0005548915 sharp_carson[256632]: 167 167
Dec  6 05:02:31 np0005548915 systemd[1]: libpod-a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504.scope: Deactivated successfully.
Dec  6 05:02:31 np0005548915 podman[256615]: 2025-12-06 10:02:31.506937614 +0000 UTC m=+0.142165496 container died a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:02:31 np0005548915 systemd[1]: var-lib-containers-storage-overlay-468e4e0dfeed2a93f0e8290b73e209f12cd9d0eefc2814b195248c1a64b5f565-merged.mount: Deactivated successfully.
Dec  6 05:02:31 np0005548915 podman[256615]: 2025-12-06 10:02:31.561145564 +0000 UTC m=+0.196373496 container remove a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 05:02:31 np0005548915 systemd[1]: libpod-conmon-a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504.scope: Deactivated successfully.
Dec  6 05:02:31 np0005548915 podman[256657]: 2025-12-06 10:02:31.798838215 +0000 UTC m=+0.079634913 container create a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:02:31 np0005548915 systemd[1]: Started libpod-conmon-a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6.scope.
Dec  6 05:02:31 np0005548915 podman[256657]: 2025-12-06 10:02:31.768115333 +0000 UTC m=+0.048912031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:02:31 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:02:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:31 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:31 np0005548915 podman[256657]: 2025-12-06 10:02:31.917616743 +0000 UTC m=+0.198413451 container init a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:02:31 np0005548915 podman[256657]: 2025-12-06 10:02:31.925376201 +0000 UTC m=+0.206172919 container start a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 05:02:31 np0005548915 podman[256657]: 2025-12-06 10:02:31.929962624 +0000 UTC m=+0.210759302 container attach a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  6 05:02:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:32 np0005548915 heuristic_lederberg[256674]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:02:32 np0005548915 heuristic_lederberg[256674]: --> All data devices are unavailable
Dec  6 05:02:32 np0005548915 systemd[1]: libpod-a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6.scope: Deactivated successfully.
Dec  6 05:02:32 np0005548915 podman[256657]: 2025-12-06 10:02:32.247239574 +0000 UTC m=+0.528036302 container died a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 05:02:32 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506-merged.mount: Deactivated successfully.
Dec  6 05:02:32 np0005548915 podman[256657]: 2025-12-06 10:02:32.300548491 +0000 UTC m=+0.581345179 container remove a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 05:02:32 np0005548915 systemd[1]: libpod-conmon-a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6.scope: Deactivated successfully.
Dec  6 05:02:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:32.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:32 np0005548915 podman[256752]: 2025-12-06 10:02:32.594701992 +0000 UTC m=+0.074721641 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  6 05:02:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:32.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:32 np0005548915 podman[256812]: 2025-12-06 10:02:32.904598404 +0000 UTC m=+0.043690549 container create 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  6 05:02:32 np0005548915 systemd[1]: Started libpod-conmon-40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e.scope.
Dec  6 05:02:32 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:02:32 np0005548915 podman[256812]: 2025-12-06 10:02:32.88533063 +0000 UTC m=+0.024422795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:02:32 np0005548915 podman[256812]: 2025-12-06 10:02:32.990896114 +0000 UTC m=+0.129988279 container init 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 05:02:32 np0005548915 podman[256812]: 2025-12-06 10:02:32.99934097 +0000 UTC m=+0.138433115 container start 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec  6 05:02:33 np0005548915 podman[256812]: 2025-12-06 10:02:33.002885755 +0000 UTC m=+0.141977920 container attach 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  6 05:02:33 np0005548915 friendly_gauss[256828]: 167 167
Dec  6 05:02:33 np0005548915 systemd[1]: libpod-40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e.scope: Deactivated successfully.
Dec  6 05:02:33 np0005548915 podman[256812]: 2025-12-06 10:02:33.007033206 +0000 UTC m=+0.146125351 container died 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:02:33 np0005548915 systemd[1]: var-lib-containers-storage-overlay-39906dccb330242ee339529710e9c74ef976e86014fe9f2c9c519523e6ef5cc5-merged.mount: Deactivated successfully.
Dec  6 05:02:33 np0005548915 podman[256812]: 2025-12-06 10:02:33.046503282 +0000 UTC m=+0.185595427 container remove 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:02:33 np0005548915 systemd[1]: libpod-conmon-40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e.scope: Deactivated successfully.
Dec  6 05:02:33 np0005548915 podman[256853]: 2025-12-06 10:02:33.218452643 +0000 UTC m=+0.046161126 container create b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:02:33 np0005548915 systemd[1]: Started libpod-conmon-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope.
Dec  6 05:02:33 np0005548915 podman[256853]: 2025-12-06 10:02:33.200600776 +0000 UTC m=+0.028309289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:02:33 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:02:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:33 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:33 np0005548915 podman[256853]: 2025-12-06 10:02:33.313690782 +0000 UTC m=+0.141399275 container init b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:02:33 np0005548915 podman[256853]: 2025-12-06 10:02:33.32183381 +0000 UTC m=+0.149542293 container start b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:02:33 np0005548915 podman[256853]: 2025-12-06 10:02:33.325028286 +0000 UTC m=+0.152736769 container attach b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]: {
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:    "1": [
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:        {
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "devices": [
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "/dev/loop3"
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            ],
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "lv_name": "ceph_lv0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "lv_size": "21470642176",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "name": "ceph_lv0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "tags": {
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.cluster_name": "ceph",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.crush_device_class": "",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.encrypted": "0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.osd_id": "1",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.type": "block",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.vdo": "0",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:                "ceph.with_tpm": "0"
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            },
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "type": "block",
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:            "vg_name": "ceph_vg0"
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:        }
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]:    ]
Dec  6 05:02:33 np0005548915 mystifying_ardinghelli[256870]: }
Dec  6 05:02:33 np0005548915 systemd[1]: libpod-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope: Deactivated successfully.
Dec  6 05:02:33 np0005548915 conmon[256870]: conmon b794c7ed814456680189 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope/container/memory.events
Dec  6 05:02:33 np0005548915 podman[256853]: 2025-12-06 10:02:33.624120989 +0000 UTC m=+0.451829472 container died b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:02:33 np0005548915 systemd[1]: var-lib-containers-storage-overlay-941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49-merged.mount: Deactivated successfully.
Dec  6 05:02:33 np0005548915 podman[256853]: 2025-12-06 10:02:33.671931748 +0000 UTC m=+0.499640231 container remove b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:02:33 np0005548915 systemd[1]: libpod-conmon-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope: Deactivated successfully.
Dec  6 05:02:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:34 np0005548915 podman[256981]: 2025-12-06 10:02:34.220794466 +0000 UTC m=+0.040115644 container create 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 05:02:34 np0005548915 systemd[1]: Started libpod-conmon-5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52.scope.
Dec  6 05:02:34 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:02:34 np0005548915 podman[256981]: 2025-12-06 10:02:34.294169989 +0000 UTC m=+0.113491187 container init 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:02:34 np0005548915 podman[256981]: 2025-12-06 10:02:34.205048015 +0000 UTC m=+0.024369203 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:02:34 np0005548915 podman[256981]: 2025-12-06 10:02:34.300586592 +0000 UTC m=+0.119907780 container start 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:02:34 np0005548915 podman[256981]: 2025-12-06 10:02:34.304085515 +0000 UTC m=+0.123406743 container attach 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:02:34 np0005548915 jovial_darwin[256998]: 167 167
Dec  6 05:02:34 np0005548915 systemd[1]: libpod-5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52.scope: Deactivated successfully.
Dec  6 05:02:34 np0005548915 podman[256981]: 2025-12-06 10:02:34.307015983 +0000 UTC m=+0.126337161 container died 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:02:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 05:02:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ffd8fd9b38b8c32c0a50b02e9d1f04a9f7bcda719f427b3aca6a5fca5e077c23-merged.mount: Deactivated successfully.
Dec  6 05:02:34 np0005548915 podman[256981]: 2025-12-06 10:02:34.340636413 +0000 UTC m=+0.159957591 container remove 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  6 05:02:34 np0005548915 systemd[1]: libpod-conmon-5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52.scope: Deactivated successfully.
Dec  6 05:02:34 np0005548915 podman[257021]: 2025-12-06 10:02:34.521541665 +0000 UTC m=+0.038187054 container create a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:02:34 np0005548915 systemd[1]: Started libpod-conmon-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope.
Dec  6 05:02:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:34 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:02:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:34 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:02:34 np0005548915 podman[257021]: 2025-12-06 10:02:34.505245488 +0000 UTC m=+0.021890877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:02:34 np0005548915 podman[257021]: 2025-12-06 10:02:34.608215214 +0000 UTC m=+0.124860633 container init a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 05:02:34 np0005548915 podman[257021]: 2025-12-06 10:02:34.615466977 +0000 UTC m=+0.132112406 container start a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Dec  6 05:02:34 np0005548915 podman[257021]: 2025-12-06 10:02:34.619816144 +0000 UTC m=+0.136461553 container attach a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 05:02:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:34.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:35 np0005548915 lvm[257112]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:02:35 np0005548915 lvm[257112]: VG ceph_vg0 finished
Dec  6 05:02:35 np0005548915 lucid_bouman[257037]: {}
Dec  6 05:02:35 np0005548915 systemd[1]: libpod-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope: Deactivated successfully.
Dec  6 05:02:35 np0005548915 systemd[1]: libpod-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope: Consumed 1.200s CPU time.
Dec  6 05:02:35 np0005548915 podman[257021]: 2025-12-06 10:02:35.338518486 +0000 UTC m=+0.855163875 container died a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:02:35 np0005548915 systemd[1]: var-lib-containers-storage-overlay-933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70-merged.mount: Deactivated successfully.
Dec  6 05:02:35 np0005548915 podman[257021]: 2025-12-06 10:02:35.378820535 +0000 UTC m=+0.895465944 container remove a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:02:35 np0005548915 systemd[1]: libpod-conmon-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope: Deactivated successfully.
Dec  6 05:02:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:02:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:02:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:02:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:36.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:36.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:37.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.468 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.513 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.513 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.514 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.526 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.527 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.528 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.528 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.529 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.529 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.529 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.530 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.559 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.560 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.560 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.560 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:02:37 np0005548915 nova_compute[254819]: 2025-12-06 10:02:37.561 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:02:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:02:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516502700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.039 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:02:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.231 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.233 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4937MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.233 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.234 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.296 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.297 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.317 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:02:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:02:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:38.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:02:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280590991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.827 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.833 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.857 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.858 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:02:38 np0005548915 nova_compute[254819]: 2025-12-06 10:02:38.859 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:02:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:02:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:02:39 np0005548915 nova_compute[254819]: 2025-12-06 10:02:39.080 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:39 np0005548915 nova_compute[254819]: 2025-12-06 10:02:39.080 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:02:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:02:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2010034084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:02:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:40.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:40.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:02:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:02:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:42.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:42.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 05:02:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:44.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:46.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:47.233Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:02:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:47.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:02:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:02:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100248 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:02:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:48.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:50.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:02:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:02:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:02:52 np0005548915 podman[257242]: 2025-12-06 10:02:52.45275461 +0000 UTC m=+0.072517287 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 05:02:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:02:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:02:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:02:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:02:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:02:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:02:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:02:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:02:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:02:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:02:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:02:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:02:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:02:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:02:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:02:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:54.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:02:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:02:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:02:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:02:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:56.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:02:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:57.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:02:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:57.234Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:02:57 np0005548915 podman[257267]: 2025-12-06 10:02:57.454072177 +0000 UTC m=+0.088092229 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:02:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:02:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:02:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:02:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:02:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:02:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:02:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:02:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:03:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:03:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:03:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:03:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:03:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:03:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:02.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:02.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:03 np0005548915 podman[257300]: 2025-12-06 10:03:03.41320624 +0000 UTC m=+0.050562965 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 05:03:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:03:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:03:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:04.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:04.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:03:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:06.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:06 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:03:06.954 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:03:06 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:03:06.956 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:03:06 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:03:06.957 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:03:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:07.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:03:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 05:03:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:08.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:08.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:03:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:03:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:03:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100310 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:03:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:10.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:10.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 05:03:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 05:03:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:03:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:12.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:03:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:14.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:14.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:03:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:16.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:17.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:03:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:03:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:03:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:18.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:03:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:18.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:20.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 05:03:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  6 05:03:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:22.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:03:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:03:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:23 np0005548915 podman[257366]: 2025-12-06 10:03:23.45792821 +0000 UTC m=+0.075685083 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:03:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:03:23
Dec  6 05:03:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:03:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:03:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.nfs', 'images', 'vms', '.rgw.root']
Dec  6 05:03:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:03:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:03:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:03:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:03:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:03:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:03:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:03:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:03:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:24.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:26.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:27.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:03:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:27 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 05:03:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004720 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:03:28 np0005548915 podman[257389]: 2025-12-06 10:03:28.45532544 +0000 UTC m=+0.093160964 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 05:03:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:03:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:28.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:03:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:28.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:30.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 05:03:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  6 05:03:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:32.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004760 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:34 np0005548915 podman[257446]: 2025-12-06 10:03:34.208396652 +0000 UTC m=+0.047840592 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 05:03:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:34.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:34.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:35 np0005548915 nova_compute[254819]: 2025-12-06 10:03:35.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:03:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:36.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:36 np0005548915 nova_compute[254819]: 2025-12-06 10:03:36.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:36 np0005548915 nova_compute[254819]: 2025-12-06 10:03:36.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:36 np0005548915 nova_compute[254819]: 2025-12-06 10:03:36.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:03:36 np0005548915 nova_compute[254819]: 2025-12-06 10:03:36.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:03:36 np0005548915 nova_compute[254819]: 2025-12-06 10:03:36.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:03:36 np0005548915 nova_compute[254819]: 2025-12-06 10:03:36.773 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:03:36 np0005548915 nova_compute[254819]: 2025-12-06 10:03:36.773 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:03:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:36 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:03:37 np0005548915 podman[257663]: 2025-12-06 10:03:37.036589677 +0000 UTC m=+0.041644854 container create 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:03:37 np0005548915 systemd[1]: Started libpod-conmon-4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6.scope.
Dec  6 05:03:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:03:37 np0005548915 podman[257663]: 2025-12-06 10:03:37.109439763 +0000 UTC m=+0.114494980 container init 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 05:03:37 np0005548915 podman[257663]: 2025-12-06 10:03:37.015398036 +0000 UTC m=+0.020453393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:03:37 np0005548915 podman[257663]: 2025-12-06 10:03:37.117627304 +0000 UTC m=+0.122682481 container start 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:03:37 np0005548915 podman[257663]: 2025-12-06 10:03:37.120401239 +0000 UTC m=+0.125456466 container attach 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 05:03:37 np0005548915 gallant_torvalds[257679]: 167 167
Dec  6 05:03:37 np0005548915 systemd[1]: libpod-4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6.scope: Deactivated successfully.
Dec  6 05:03:37 np0005548915 podman[257663]: 2025-12-06 10:03:37.124561621 +0000 UTC m=+0.129616828 container died 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 05:03:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b8fa4ef833de826db19c2f441f77d62d91c7e2f15086dca401d84bab21596192-merged.mount: Deactivated successfully.
Dec  6 05:03:37 np0005548915 podman[257663]: 2025-12-06 10:03:37.169247207 +0000 UTC m=+0.174302394 container remove 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 05:03:37 np0005548915 systemd[1]: libpod-conmon-4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6.scope: Deactivated successfully.
Dec  6 05:03:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:03:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391830354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.232 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:03:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:37.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:03:37 np0005548915 podman[257705]: 2025-12-06 10:03:37.356808538 +0000 UTC m=+0.044983214 container create 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:03:37 np0005548915 systemd[1]: Started libpod-conmon-9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426.scope.
Dec  6 05:03:37 np0005548915 podman[257705]: 2025-12-06 10:03:37.337268681 +0000 UTC m=+0.025443397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.433 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:03:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.436 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4865MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.437 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.437 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:03:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:37 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:37 np0005548915 podman[257705]: 2025-12-06 10:03:37.454631628 +0000 UTC m=+0.142806324 container init 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec  6 05:03:37 np0005548915 podman[257705]: 2025-12-06 10:03:37.461845962 +0000 UTC m=+0.150020648 container start 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:03:37 np0005548915 podman[257705]: 2025-12-06 10:03:37.464574386 +0000 UTC m=+0.152749202 container attach 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.550 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.551 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:03:37 np0005548915 nova_compute[254819]: 2025-12-06 10:03:37.569 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:03:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:37 np0005548915 mystifying_lovelace[257721]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:03:37 np0005548915 mystifying_lovelace[257721]: --> All data devices are unavailable
Dec  6 05:03:37 np0005548915 systemd[1]: libpod-9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426.scope: Deactivated successfully.
Dec  6 05:03:37 np0005548915 podman[257705]: 2025-12-06 10:03:37.824307984 +0000 UTC m=+0.512482680 container died 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:03:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989-merged.mount: Deactivated successfully.
Dec  6 05:03:37 np0005548915 podman[257705]: 2025-12-06 10:03:37.874980221 +0000 UTC m=+0.563154907 container remove 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:03:37 np0005548915 systemd[1]: libpod-conmon-9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426.scope: Deactivated successfully.
Dec  6 05:03:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:03:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270802321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:03:38 np0005548915 nova_compute[254819]: 2025-12-06 10:03:38.028 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:03:38 np0005548915 nova_compute[254819]: 2025-12-06 10:03:38.034 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:03:38 np0005548915 nova_compute[254819]: 2025-12-06 10:03:38.048 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:03:38 np0005548915 nova_compute[254819]: 2025-12-06 10:03:38.050 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:03:38 np0005548915 nova_compute[254819]: 2025-12-06 10:03:38.050 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:03:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:03:38 np0005548915 podman[257861]: 2025-12-06 10:03:38.513323416 +0000 UTC m=+0.050544036 container create 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:03:38 np0005548915 systemd[1]: Started libpod-conmon-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope.
Dec  6 05:03:38 np0005548915 podman[257861]: 2025-12-06 10:03:38.493423478 +0000 UTC m=+0.030644098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:03:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:03:38 np0005548915 podman[257861]: 2025-12-06 10:03:38.621232957 +0000 UTC m=+0.158453667 container init 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 05:03:38 np0005548915 podman[257861]: 2025-12-06 10:03:38.63282586 +0000 UTC m=+0.170046510 container start 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 05:03:38 np0005548915 elastic_hugle[257877]: 167 167
Dec  6 05:03:38 np0005548915 podman[257861]: 2025-12-06 10:03:38.637431294 +0000 UTC m=+0.174651954 container attach 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 05:03:38 np0005548915 systemd[1]: libpod-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope: Deactivated successfully.
Dec  6 05:03:38 np0005548915 conmon[257877]: conmon 1d16e8bb005b0e1a2e5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope/container/memory.events
Dec  6 05:03:38 np0005548915 podman[257861]: 2025-12-06 10:03:38.640061006 +0000 UTC m=+0.177281656 container died 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:03:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:03:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:38.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:03:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1e732b3830518341a6b166182c596437093311d0bc7825d75664ede4ccaca28a-merged.mount: Deactivated successfully.
Dec  6 05:03:38 np0005548915 podman[257861]: 2025-12-06 10:03:38.694315349 +0000 UTC m=+0.231535969 container remove 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:03:38 np0005548915 systemd[1]: libpod-conmon-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope: Deactivated successfully.
Dec  6 05:03:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:38.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:38 np0005548915 podman[257900]: 2025-12-06 10:03:38.886240789 +0000 UTC m=+0.043604278 container create 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:03:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:03:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:03:38 np0005548915 systemd[1]: Started libpod-conmon-2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471.scope.
Dec  6 05:03:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:03:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:38 np0005548915 podman[257900]: 2025-12-06 10:03:38.868276473 +0000 UTC m=+0.025639982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:03:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:38 np0005548915 podman[257900]: 2025-12-06 10:03:38.974611003 +0000 UTC m=+0.131974512 container init 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:03:38 np0005548915 podman[257900]: 2025-12-06 10:03:38.984200862 +0000 UTC m=+0.141564351 container start 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:03:38 np0005548915 podman[257900]: 2025-12-06 10:03:38.987819449 +0000 UTC m=+0.145182958 container attach 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.050 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.051 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.051 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.068 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]: {
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:    "1": [
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:        {
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "devices": [
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "/dev/loop3"
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            ],
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "lv_name": "ceph_lv0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "lv_size": "21470642176",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "name": "ceph_lv0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "tags": {
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.cluster_name": "ceph",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.crush_device_class": "",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.encrypted": "0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.osd_id": "1",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.type": "block",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.vdo": "0",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:                "ceph.with_tpm": "0"
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            },
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "type": "block",
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:            "vg_name": "ceph_vg0"
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:        }
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]:    ]
Dec  6 05:03:39 np0005548915 adoring_goodall[257917]: }
Dec  6 05:03:39 np0005548915 systemd[1]: libpod-2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471.scope: Deactivated successfully.
Dec  6 05:03:39 np0005548915 podman[257900]: 2025-12-06 10:03:39.273871568 +0000 UTC m=+0.431235057 container died 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec  6 05:03:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5-merged.mount: Deactivated successfully.
Dec  6 05:03:39 np0005548915 podman[257900]: 2025-12-06 10:03:39.322050578 +0000 UTC m=+0.479414067 container remove 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 05:03:39 np0005548915 systemd[1]: libpod-conmon-2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471.scope: Deactivated successfully.
Dec  6 05:03:39 np0005548915 nova_compute[254819]: 2025-12-06 10:03:39.761 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:03:39 np0005548915 podman[258031]: 2025-12-06 10:03:39.930375382 +0000 UTC m=+0.051969892 container create e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 05:03:39 np0005548915 systemd[1]: Started libpod-conmon-e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4.scope.
Dec  6 05:03:39 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:03:39 np0005548915 podman[258031]: 2025-12-06 10:03:39.90656097 +0000 UTC m=+0.028155520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:03:40 np0005548915 podman[258031]: 2025-12-06 10:03:40.016751893 +0000 UTC m=+0.138346413 container init e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 05:03:40 np0005548915 podman[258031]: 2025-12-06 10:03:40.029426696 +0000 UTC m=+0.151021196 container start e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 05:03:40 np0005548915 podman[258031]: 2025-12-06 10:03:40.033236608 +0000 UTC m=+0.154831138 container attach e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:03:40 np0005548915 laughing_vaughan[258048]: 167 167
Dec  6 05:03:40 np0005548915 systemd[1]: libpod-e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4.scope: Deactivated successfully.
Dec  6 05:03:40 np0005548915 podman[258031]: 2025-12-06 10:03:40.034934544 +0000 UTC m=+0.156529044 container died e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 05:03:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9d6dcd015579376f6be1ce1d725de4f82563e5efdb900858466230d697873ff2-merged.mount: Deactivated successfully.
Dec  6 05:03:40 np0005548915 podman[258031]: 2025-12-06 10:03:40.083993088 +0000 UTC m=+0.205587588 container remove e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:03:40 np0005548915 systemd[1]: libpod-conmon-e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4.scope: Deactivated successfully.
Dec  6 05:03:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:40 np0005548915 podman[258072]: 2025-12-06 10:03:40.288229929 +0000 UTC m=+0.061352457 container create 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:03:40 np0005548915 systemd[1]: Started libpod-conmon-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope.
Dec  6 05:03:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:40 np0005548915 podman[258072]: 2025-12-06 10:03:40.258736013 +0000 UTC m=+0.031858601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:03:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:03:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:03:40 np0005548915 podman[258072]: 2025-12-06 10:03:40.399071041 +0000 UTC m=+0.172193599 container init 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 05:03:40 np0005548915 podman[258072]: 2025-12-06 10:03:40.406228743 +0000 UTC m=+0.179351261 container start 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 05:03:40 np0005548915 podman[258072]: 2025-12-06 10:03:40.409973945 +0000 UTC m=+0.183096463 container attach 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:03:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:40.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:03:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:40.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:03:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:03:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:03:41 np0005548915 lvm[258165]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:03:41 np0005548915 lvm[258165]: VG ceph_vg0 finished
Dec  6 05:03:41 np0005548915 agitated_shockley[258089]: {}
Dec  6 05:03:41 np0005548915 systemd[1]: libpod-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope: Deactivated successfully.
Dec  6 05:03:41 np0005548915 systemd[1]: libpod-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope: Consumed 1.300s CPU time.
Dec  6 05:03:41 np0005548915 podman[258072]: 2025-12-06 10:03:41.162592423 +0000 UTC m=+0.935714951 container died 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 05:03:41 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0-merged.mount: Deactivated successfully.
Dec  6 05:03:41 np0005548915 podman[258072]: 2025-12-06 10:03:41.217071713 +0000 UTC m=+0.990194221 container remove 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:03:41 np0005548915 systemd[1]: libpod-conmon-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope: Deactivated successfully.
Dec  6 05:03:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:03:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:03:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:03:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:42.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:44.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:46.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:47.239Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:03:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:47.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:03:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:03:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:48.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:03:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:50.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:03:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:03:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:03:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:52.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:03:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:03:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:03:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:03:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:03:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:03:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:03:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:03:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:03:54.236 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:03:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:03:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:03:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:03:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:03:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:54 np0005548915 podman[258245]: 2025-12-06 10:03:54.481607195 +0000 UTC m=+0.100951035 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec  6 05:03:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:54.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:03:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:56.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:57.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:03:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:03:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:03:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:03:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:03:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:03:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:03:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:03:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:03:59 np0005548915 podman[258270]: 2025-12-06 10:03:59.509988942 +0000 UTC m=+0.138773887 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  6 05:03:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T10:03:59.586882406Z level=info msg="Completed cleanup jobs" duration=22.684302ms
Dec  6 05:03:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T10:03:59.701658563Z level=info msg="Update check succeeded" duration=75.390685ms
Dec  6 05:03:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T10:03:59.717774508Z level=info msg="Update check succeeded" duration=52.548998ms
Dec  6 05:04:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:00.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:04:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec  6 05:04:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:02.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:04 np0005548915 podman[258301]: 2025-12-06 10:04:04.40950293 +0000 UTC m=+0.044294917 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  6 05:04:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:04.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:04.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.792134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444792200, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2107, "num_deletes": 251, "total_data_size": 4037785, "memory_usage": 4094512, "flush_reason": "Manual Compaction"}
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec  6 05:04:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444824674, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 3957290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20102, "largest_seqno": 22208, "table_properties": {"data_size": 3947958, "index_size": 5826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19302, "raw_average_key_size": 20, "raw_value_size": 3929213, "raw_average_value_size": 4084, "num_data_blocks": 257, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015229, "oldest_key_time": 1765015229, "file_creation_time": 1765015444, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 32572 microseconds, and 10396 cpu microseconds.
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.824719) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 3957290 bytes OK
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.824737) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830029) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830041) EVENT_LOG_v1 {"time_micros": 1765015444830038, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830085) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4029193, prev total WAL file size 4029193, number of live WAL files 2.
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.831042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3864KB)], [44(13MB)]
Dec  6 05:04:04 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444831072, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17911753, "oldest_snapshot_seqno": -1}
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5479 keys, 15736574 bytes, temperature: kUnknown
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015445002704, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15736574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15697085, "index_size": 24659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138046, "raw_average_key_size": 25, "raw_value_size": 15595058, "raw_average_value_size": 2846, "num_data_blocks": 1018, "num_entries": 5479, "num_filter_entries": 5479, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015444, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.003057) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15736574 bytes
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.004638) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.3 rd, 91.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 13.3 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(8.5) write-amplify(4.0) OK, records in: 5995, records dropped: 516 output_compression: NoCompression
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.004663) EVENT_LOG_v1 {"time_micros": 1765015445004651, "job": 22, "event": "compaction_finished", "compaction_time_micros": 171742, "compaction_time_cpu_micros": 29810, "output_level": 6, "num_output_files": 1, "total_output_size": 15736574, "num_input_records": 5995, "num_output_records": 5479, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015445005988, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015445009364, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:05 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:06.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:07.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:04:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:04:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:04:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:08.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:04:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:04:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:04:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:10.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 05:04:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 05:04:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:04:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:04:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:14.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:04:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:16.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:04:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:17.243Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:04:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:17.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:04:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:04:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:18.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:20.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 05:04:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  6 05:04:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:04:23
Dec  6 05:04:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:04:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:04:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', '.nfs', 'vms']
Dec  6 05:04:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:04:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:04:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:04:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:04:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:04:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:04:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:04:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:04:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:24.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:25 np0005548915 podman[258370]: 2025-12-06 10:04:25.431345255 +0000 UTC m=+0.060253467 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 05:04:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:26.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:27.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:04:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:04:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:28.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:28.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:30 np0005548915 podman[258395]: 2025-12-06 10:04:30.450082009 +0000 UTC m=+0.080791291 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  6 05:04:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:04:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  6 05:04:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:32.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:34.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:34.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:35 np0005548915 podman[258452]: 2025-12-06 10:04:35.417367185 +0000 UTC m=+0.048340625 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 05:04:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:36 np0005548915 nova_compute[254819]: 2025-12-06 10:04:36.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:04:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:36.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:36.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:37.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.714034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477714070, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 499, "num_deletes": 251, "total_data_size": 561369, "memory_usage": 569888, "flush_reason": "Manual Compaction"}
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477718234, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 393629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22210, "largest_seqno": 22707, "table_properties": {"data_size": 391065, "index_size": 600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6539, "raw_average_key_size": 19, "raw_value_size": 385955, "raw_average_value_size": 1148, "num_data_blocks": 27, "num_entries": 336, "num_filter_entries": 336, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015445, "oldest_key_time": 1765015445, "file_creation_time": 1765015477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4225 microseconds, and 1586 cpu microseconds.
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.718264) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 393629 bytes OK
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.718277) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721421) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721436) EVENT_LOG_v1 {"time_micros": 1765015477721430, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 558525, prev total WAL file size 558525, number of live WAL files 2.
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(384KB)], [47(15MB)]
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477722065, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16130203, "oldest_snapshot_seqno": -1}
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.777 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:04:37 np0005548915 nova_compute[254819]: 2025-12-06 10:04:37.777 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5315 keys, 12216148 bytes, temperature: kUnknown
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477864994, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12216148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12182049, "index_size": 19717, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 135017, "raw_average_key_size": 25, "raw_value_size": 12087106, "raw_average_value_size": 2274, "num_data_blocks": 802, "num_entries": 5315, "num_filter_entries": 5315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.865423) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12216148 bytes
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.867454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.7 rd, 85.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 15.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(72.0) write-amplify(31.0) OK, records in: 5815, records dropped: 500 output_compression: NoCompression
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.867506) EVENT_LOG_v1 {"time_micros": 1765015477867470, "job": 24, "event": "compaction_finished", "compaction_time_micros": 143173, "compaction_time_cpu_micros": 36491, "output_level": 6, "num_output_files": 1, "total_output_size": 12216148, "num_input_records": 5815, "num_output_records": 5315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477868206, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477872442, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:37 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:04:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.242 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:04:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.422 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.423 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4908MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.424 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.424 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.481 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.482 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.496 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=404 latency=0.002000053s ======
Dec  6 05:04:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:38.725 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000053s
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:04:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - - [06/Dec/2025:10:04:38.740 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000027s
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:38.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:38.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:04:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:04:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:04:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870583344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.964 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.970 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.987 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.989 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  6 05:04:38 np0005548915 nova_compute[254819]: 2025-12-06 10:04:38.989 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:04:39 np0005548915 nova_compute[254819]: 2025-12-06 10:04:39.983 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.007 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.007 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.007 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.035 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.035 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.035 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:04:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:04:40 np0005548915 nova_compute[254819]: 2025-12-06 10:04:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:04:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:40.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:40.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:04:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:04:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:04:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  6 05:04:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 05:04:42 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 05:04:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:42.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:42.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec  6 05:04:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec  6 05:04:43 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec  6 05:04:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 05:04:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 05:04:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:04:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:04:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:44.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:45 np0005548915 podman[258699]: 2025-12-06 10:04:45.341724224 +0000 UTC m=+0.052374605 container create 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:04:45 np0005548915 systemd[1]: Started libpod-conmon-5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0.scope.
Dec  6 05:04:45 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:04:45 np0005548915 podman[258699]: 2025-12-06 10:04:45.320563072 +0000 UTC m=+0.031213493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:04:45 np0005548915 podman[258699]: 2025-12-06 10:04:45.426352127 +0000 UTC m=+0.137002588 container init 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:04:45 np0005548915 podman[258699]: 2025-12-06 10:04:45.435811933 +0000 UTC m=+0.146462304 container start 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 05:04:45 np0005548915 podman[258699]: 2025-12-06 10:04:45.43904409 +0000 UTC m=+0.149694541 container attach 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:04:45 np0005548915 eager_franklin[258715]: 167 167
Dec  6 05:04:45 np0005548915 systemd[1]: libpod-5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0.scope: Deactivated successfully.
Dec  6 05:04:45 np0005548915 podman[258699]: 2025-12-06 10:04:45.444742994 +0000 UTC m=+0.155393385 container died 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:04:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3062be52938bac2a7a03f0815012980e16a102ecdf7b6af112a7cabe0ffebf43-merged.mount: Deactivated successfully.
Dec  6 05:04:45 np0005548915 podman[258699]: 2025-12-06 10:04:45.489296136 +0000 UTC m=+0.199946507 container remove 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 05:04:45 np0005548915 systemd[1]: libpod-conmon-5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0.scope: Deactivated successfully.
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:04:45 np0005548915 podman[258739]: 2025-12-06 10:04:45.660615029 +0000 UTC m=+0.051697377 container create dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:04:45 np0005548915 systemd[1]: Started libpod-conmon-dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7.scope.
Dec  6 05:04:45 np0005548915 podman[258739]: 2025-12-06 10:04:45.634528295 +0000 UTC m=+0.025610733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:04:45 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:04:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:45 np0005548915 podman[258739]: 2025-12-06 10:04:45.748944782 +0000 UTC m=+0.140027150 container init dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  6 05:04:45 np0005548915 podman[258739]: 2025-12-06 10:04:45.758527721 +0000 UTC m=+0.149610069 container start dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 05:04:45 np0005548915 podman[258739]: 2025-12-06 10:04:45.762322003 +0000 UTC m=+0.153404411 container attach dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880271287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:04:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880271287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:04:46 np0005548915 friendly_rubin[258756]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:04:46 np0005548915 friendly_rubin[258756]: --> All data devices are unavailable
Dec  6 05:04:46 np0005548915 systemd[1]: libpod-dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7.scope: Deactivated successfully.
Dec  6 05:04:46 np0005548915 podman[258739]: 2025-12-06 10:04:46.09834964 +0000 UTC m=+0.489432028 container died dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:04:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364-merged.mount: Deactivated successfully.
Dec  6 05:04:46 np0005548915 podman[258739]: 2025-12-06 10:04:46.146344945 +0000 UTC m=+0.537427313 container remove dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 05:04:46 np0005548915 systemd[1]: libpod-conmon-dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7.scope: Deactivated successfully.
Dec  6 05:04:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail
Dec  6 05:04:46 np0005548915 podman[258876]: 2025-12-06 10:04:46.78614692 +0000 UTC m=+0.042853168 container create d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:04:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:04:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:46.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:04:46 np0005548915 systemd[1]: Started libpod-conmon-d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81.scope.
Dec  6 05:04:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003b70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:46 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:04:46 np0005548915 podman[258876]: 2025-12-06 10:04:46.766844239 +0000 UTC m=+0.023550517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:04:47 np0005548915 podman[258876]: 2025-12-06 10:04:47.136675519 +0000 UTC m=+0.393381797 container init d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:04:47 np0005548915 podman[258876]: 2025-12-06 10:04:47.146978896 +0000 UTC m=+0.403685154 container start d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:04:47 np0005548915 podman[258876]: 2025-12-06 10:04:47.150526793 +0000 UTC m=+0.407233061 container attach d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:04:47 np0005548915 amazing_satoshi[258892]: 167 167
Dec  6 05:04:47 np0005548915 systemd[1]: libpod-d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81.scope: Deactivated successfully.
Dec  6 05:04:47 np0005548915 podman[258876]: 2025-12-06 10:04:47.154657894 +0000 UTC m=+0.411364172 container died d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 05:04:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6667462582bc4ce1e1e541ee0d4b14f983ea6d061537e775481d3f8ed3d45c73-merged.mount: Deactivated successfully.
Dec  6 05:04:47 np0005548915 podman[258876]: 2025-12-06 10:04:47.198777915 +0000 UTC m=+0.455484163 container remove d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:04:47 np0005548915 systemd[1]: libpod-conmon-d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81.scope: Deactivated successfully.
Dec  6 05:04:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:47.247Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:04:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:47.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:04:47 np0005548915 podman[258918]: 2025-12-06 10:04:47.360768945 +0000 UTC m=+0.038437848 container create f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 05:04:47 np0005548915 systemd[1]: Started libpod-conmon-f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af.scope.
Dec  6 05:04:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:04:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:47 np0005548915 podman[258918]: 2025-12-06 10:04:47.427926716 +0000 UTC m=+0.105595629 container init f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 05:04:47 np0005548915 podman[258918]: 2025-12-06 10:04:47.433980009 +0000 UTC m=+0.111648932 container start f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:04:47 np0005548915 podman[258918]: 2025-12-06 10:04:47.438081499 +0000 UTC m=+0.115750402 container attach f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:04:47 np0005548915 podman[258918]: 2025-12-06 10:04:47.345054711 +0000 UTC m=+0.022723614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]: {
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:    "1": [
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:        {
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "devices": [
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "/dev/loop3"
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            ],
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "lv_name": "ceph_lv0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "lv_size": "21470642176",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "name": "ceph_lv0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "tags": {
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.cluster_name": "ceph",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.crush_device_class": "",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.encrypted": "0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.osd_id": "1",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.type": "block",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.vdo": "0",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:                "ceph.with_tpm": "0"
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            },
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "type": "block",
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:            "vg_name": "ceph_vg0"
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:        }
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]:    ]
Dec  6 05:04:47 np0005548915 stupefied_morse[258935]: }
Dec  6 05:04:47 np0005548915 systemd[1]: libpod-f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af.scope: Deactivated successfully.
Dec  6 05:04:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:47 np0005548915 podman[258945]: 2025-12-06 10:04:47.746172998 +0000 UTC m=+0.024206083 container died f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 05:04:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd-merged.mount: Deactivated successfully.
Dec  6 05:04:47 np0005548915 podman[258945]: 2025-12-06 10:04:47.795609581 +0000 UTC m=+0.073642656 container remove f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:04:47 np0005548915 systemd[1]: libpod-conmon-f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af.scope: Deactivated successfully.
Dec  6 05:04:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec  6 05:04:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec  6 05:04:48 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec  6 05:04:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:48 np0005548915 podman[259052]: 2025-12-06 10:04:48.389035865 +0000 UTC m=+0.042142068 container create 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:04:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 8.3 MiB/s wr, 68 op/s
Dec  6 05:04:48 np0005548915 systemd[1]: Started libpod-conmon-4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535.scope.
Dec  6 05:04:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:04:48 np0005548915 podman[259052]: 2025-12-06 10:04:48.372640073 +0000 UTC m=+0.025746296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:04:48 np0005548915 podman[259052]: 2025-12-06 10:04:48.469979538 +0000 UTC m=+0.123085751 container init 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 05:04:48 np0005548915 podman[259052]: 2025-12-06 10:04:48.476491773 +0000 UTC m=+0.129597976 container start 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:04:48 np0005548915 podman[259052]: 2025-12-06 10:04:48.48081837 +0000 UTC m=+0.133924593 container attach 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec  6 05:04:48 np0005548915 confident_hertz[259069]: 167 167
Dec  6 05:04:48 np0005548915 systemd[1]: libpod-4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535.scope: Deactivated successfully.
Dec  6 05:04:48 np0005548915 podman[259052]: 2025-12-06 10:04:48.483566944 +0000 UTC m=+0.136673187 container died 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:04:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-903912ea43a427dd923d88a56fa2a0ed5e1af5a61a7615237329fd76357dbcb9-merged.mount: Deactivated successfully.
Dec  6 05:04:48 np0005548915 podman[259052]: 2025-12-06 10:04:48.531154228 +0000 UTC m=+0.184260441 container remove 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:04:48 np0005548915 systemd[1]: libpod-conmon-4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535.scope: Deactivated successfully.
Dec  6 05:04:48 np0005548915 podman[259093]: 2025-12-06 10:04:48.705322935 +0000 UTC m=+0.055305673 container create 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  6 05:04:48 np0005548915 systemd[1]: Started libpod-conmon-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope.
Dec  6 05:04:48 np0005548915 podman[259093]: 2025-12-06 10:04:48.680917646 +0000 UTC m=+0.030900404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:04:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:04:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:04:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:04:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:48.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:04:48 np0005548915 podman[259093]: 2025-12-06 10:04:48.807363356 +0000 UTC m=+0.157346084 container init 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:04:48 np0005548915 podman[259093]: 2025-12-06 10:04:48.818138937 +0000 UTC m=+0.168121665 container start 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 05:04:48 np0005548915 podman[259093]: 2025-12-06 10:04:48.822551396 +0000 UTC m=+0.172534194 container attach 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec  6 05:04:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:49 np0005548915 lvm[259186]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:04:49 np0005548915 lvm[259186]: VG ceph_vg0 finished
Dec  6 05:04:49 np0005548915 upbeat_varahamihira[259110]: {}
Dec  6 05:04:49 np0005548915 systemd[1]: libpod-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope: Deactivated successfully.
Dec  6 05:04:49 np0005548915 systemd[1]: libpod-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope: Consumed 1.209s CPU time.
Dec  6 05:04:49 np0005548915 podman[259093]: 2025-12-06 10:04:49.602058868 +0000 UTC m=+0.952041576 container died 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:04:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2-merged.mount: Deactivated successfully.
Dec  6 05:04:49 np0005548915 podman[259093]: 2025-12-06 10:04:49.647103802 +0000 UTC m=+0.997086510 container remove 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:04:49 np0005548915 systemd[1]: libpod-conmon-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope: Deactivated successfully.
Dec  6 05:04:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:04:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:04:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003b70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.8 MiB/s wr, 56 op/s
Dec  6 05:04:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:50 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:50 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:04:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:04:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:50.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:04:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:04:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  6 05:04:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003b70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Dec  6 05:04:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec  6 05:04:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec  6 05:04:52 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec  6 05:04:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:52.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:04:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:04:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:04:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:04:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:04:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:04:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:04:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:04:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:04:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:04:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:04:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:04:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:04:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:04:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Dec  6 05:04:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 621 B/s wr, 5 op/s
Dec  6 05:04:56 np0005548915 podman[259257]: 2025-12-06 10:04:56.430306351 +0000 UTC m=+0.060927734 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:04:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:56.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:56.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:57.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:04:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:04:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:04:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec  6 05:04:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:58.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:04:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:04:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:58.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:04:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec  6 05:05:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:00.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:00.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Dec  6 05:05:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Dec  6 05:05:01 np0005548915 podman[259283]: 2025-12-06 10:05:01.469428236 +0000 UTC m=+0.094102329 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:05:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec  6 05:05:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:02.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 438 B/s wr, 4 op/s
Dec  6 05:05:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy ignored for local
Dec  6 05:05:04 np0005548915 kernel: ganesha.nfsd[258355]: segfault at 50 ip 00007f4803bbf32e sp 00007f47d0ff8210 error 4 in libntirpc.so.5.8[7f4803ba4000+2c000] likely on CPU 5 (core 0, socket 5)
Dec  6 05:05:04 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 05:05:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:04 np0005548915 systemd[1]: Started Process Core Dump (PID 259312/UID 0).
Dec  6 05:05:06 np0005548915 systemd-coredump[259313]: Process 213778 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 86:#012#0  0x00007f4803bbf32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 05:05:06 np0005548915 systemd[1]: systemd-coredump@6-259312-0.service: Deactivated successfully.
Dec  6 05:05:06 np0005548915 systemd[1]: systemd-coredump@6-259312-0.service: Consumed 1.248s CPU time.
Dec  6 05:05:06 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 05:05:06 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 05:05:06 np0005548915 podman[259322]: 2025-12-06 10:05:06.239862406 +0000 UTC m=+0.027054961 container died 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  6 05:05:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273-merged.mount: Deactivated successfully.
Dec  6 05:05:06 np0005548915 podman[259322]: 2025-12-06 10:05:06.285176318 +0000 UTC m=+0.072368853 container remove 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:05:06 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 05:05:06 np0005548915 podman[259320]: 2025-12-06 10:05:06.303380619 +0000 UTC m=+0.084486580 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 05:05:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:05:06 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 05:05:06 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.549s CPU time.
Dec  6 05:05:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:06.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:07.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:05:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:07.251Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:05:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:07.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:05:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:08 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:08.037 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:05:08 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:08.038 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:05:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:05:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:08.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:08.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:05:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:05:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:05:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:10.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100510 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:05:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:10.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  6 05:05:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  6 05:05:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:05:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:12.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:05:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:14.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:16.040 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:05:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:05:16 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 7.
Dec  6 05:05:16 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:05:16 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.549s CPU time.
Dec  6 05:05:16 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 05:05:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100516 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:05:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:16 np0005548915 podman[259461]: 2025-12-06 10:05:16.860622247 +0000 UTC m=+0.055980031 container create cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec  6 05:05:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:16 np0005548915 podman[259461]: 2025-12-06 10:05:16.83331576 +0000 UTC m=+0.028673584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:16 np0005548915 podman[259461]: 2025-12-06 10:05:16.958152767 +0000 UTC m=+0.153510531 container init cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 05:05:16 np0005548915 podman[259461]: 2025-12-06 10:05:16.963797679 +0000 UTC m=+0.159155423 container start cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 05:05:16 np0005548915 bash[259461]: cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350
Dec  6 05:05:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:16 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 05:05:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:16 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 05:05:16 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:05:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 05:05:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 05:05:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 05:05:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 05:05:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 05:05:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:05:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:17.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:05:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:05:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:05:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:20.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  6 05:05:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  6 05:05:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:05:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:23 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:05:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:23 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:05:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:05:23
Dec  6 05:05:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:05:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:05:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', '.rgw.root', 'backups', '.nfs', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control']
Dec  6 05:05:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:05:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:05:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:05:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:05:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:05:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:05:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec  6 05:05:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:26.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:26.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:27.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:05:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:27.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:05:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:27.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:05:27 np0005548915 podman[259532]: 2025-12-06 10:05:27.429117218 +0000 UTC m=+0.060693918 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 05:05:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 05:05:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:28.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:05:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:05:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:30 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:30 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:05:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:30.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:30 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:30.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Dec  6 05:05:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.218 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.219 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.239 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:05:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.357 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.357 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.366 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.366 254824 INFO nova.compute.claims [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:05:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  6 05:05:32 np0005548915 podman[259599]: 2025-12-06 10:05:32.448158431 +0000 UTC m=+0.074791538 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller)
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.479 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:05:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:05:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:32.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100532 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:05:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:05:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348345609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:05:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.915 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.924 254824 DEBUG nova.compute.provider_tree [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.947 254824 DEBUG nova.scheduler.client.report [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.971 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:32 np0005548915 nova_compute[254819]: 2025-12-06 10:05:32.972 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.027 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.028 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.073 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.122 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.235 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.238 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.239 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Creating image(s)#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.281 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.328 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.364 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.368 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.369 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.606 254824 DEBUG nova.virt.libvirt.imagebackend [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image locations are: [{'url': 'rbd://5ecd3f74-dade-5fc4-92ce-8950ae424258/images/9489b8a5-a798-4e26-87f9-59bb1eb2e6fd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://5ecd3f74-dade-5fc4-92ce-8950ae424258/images/9489b8a5-a798-4e26-87f9-59bb1eb2e6fd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.818 254824 WARNING oslo_policy.policy [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.818 254824 WARNING oslo_policy.policy [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  6 05:05:33 np0005548915 nova_compute[254819]: 2025-12-06 10:05:33.820 254824 DEBUG nova.policy [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:05:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:34 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:34 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Dec  6 05:05:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:34.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:34 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.031 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.050 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Successfully created port: d4daf2d1-1774-4e84-b69b-60ba95ce1518 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.087 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.089 254824 DEBUG nova.virt.images [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] 9489b8a5-a798-4e26-87f9-59bb1eb2e6fd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.091 254824 DEBUG nova.privsep.utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.092 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.294 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.299 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:35 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.369 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.371 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.399 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.404 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec  6 05:05:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec  6 05:05:35 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.752 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.775 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.777 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.778 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 05:05:35 np0005548915 nova_compute[254819]: 2025-12-06 10:05:35.796 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.186 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Successfully updated port: d4daf2d1-1774-4e84-b69b-60ba95ce1518 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.209 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.210 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.210 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:05:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:36 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:36 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.416 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:05:36 np0005548915 podman[259753]: 2025-12-06 10:05:36.461137075 +0000 UTC m=+0.082547027 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 05:05:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec  6 05:05:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.700 254824 DEBUG nova.compute.manager [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.700 254824 DEBUG nova.compute.manager [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing instance network info cache due to event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.700 254824 DEBUG oslo_concurrency.lockutils [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:05:36 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec  6 05:05:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:36 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.917 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:36 np0005548915 nova_compute[254819]: 2025-12-06 10:05:36.994 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.085 254824 DEBUG nova.objects.instance [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.108 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.108 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Ensure instance console log exists: /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.110 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.110 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.110 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.157 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.177 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance network_info: |[{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.178 254824 DEBUG oslo_concurrency.lockutils [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.178 254824 DEBUG nova.network.neutron [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.181 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start _get_guest_xml network_info=[{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.186 254824 WARNING nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.191 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.192 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.198 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.199 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.199 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.205 254824 DEBUG nova.privsep.utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.205 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:37.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:05:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:05:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654563727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.659 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.686 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.691 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.808 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.853 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.853 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.854 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.854 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:05:37 np0005548915 nova_compute[254819]: 2025-12-06 10:05:37.854 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:05:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82878470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.149 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.151 254824 DEBUG nova.virt.libvirt.vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430712907',display_name='tempest-TestNetworkBasicOps-server-1430712907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430712907',id=1,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAfMPOvgHaRlqGgLXkto0FcIKRTuQseDyB3UM7MdJ4qc4V82jaOJG1wyoIF6xrRvoJcXVE+RFVPueMCiHrP5rYBgCoIkNmahi09ifuS6NMzBYr/VB4Uf4Lhhp6Gu2WU0Q==',key_name='tempest-TestNetworkBasicOps-1259992561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-m1904u1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:05:33Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=9f4c3de7-de9e-45d5-b170-3469a0bd0959,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.151 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.152 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.154 254824 DEBUG nova.objects.instance [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.176 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <uuid>9f4c3de7-de9e-45d5-b170-3469a0bd0959</uuid>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <name>instance-00000001</name>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-1430712907</nova:name>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:05:37</nova:creationTime>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <nova:port uuid="d4daf2d1-1774-4e84-b69b-60ba95ce1518">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <entry name="serial">9f4c3de7-de9e-45d5-b170-3469a0bd0959</entry>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <entry name="uuid">9f4c3de7-de9e-45d5-b170-3469a0bd0959</entry>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:a5:32:83"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <target dev="tapd4daf2d1-17"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/console.log" append="off"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:05:38 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:05:38 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:05:38 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:05:38 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.176 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Preparing to wait for external event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG nova.virt.libvirt.vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430712907',display_name='tempest-TestNetworkBasicOps-server-1430712907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430712907',id=1,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAfMPOvgHaRlqGgLXkto0FcIKRTuQseDyB3UM7MdJ4qc4V82jaOJG1wyoIF6xrRvoJcXVE+RFVPueMCiHrP5rYBgCoIkNmahi09ifuS6NMzBYr/VB4Uf4Lhhp6Gu2WU0Q==',key_name='tempest-TestNetworkBasicOps-1259992561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-m1904u1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:05:33Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=9f4c3de7-de9e-45d5-b170-3469a0bd0959,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.178 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.178 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.178 254824 DEBUG os_vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.210 254824 DEBUG ovsdbapp.backend.ovs_idl [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.211 254824 DEBUG ovsdbapp.backend.ovs_idl [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.211 254824 DEBUG ovsdbapp.backend.ovs_idl [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.211 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.212 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.212 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.212 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.214 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.224 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.225 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.225 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.226 254824 INFO oslo.privsep.daemon [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmph1_9zsm8/privsep.sock']#033[00m
Dec  6 05:05:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:38 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:05:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551840607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.304 254824 DEBUG nova.network.neutron [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated VIF entry in instance network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.305 254824 DEBUG nova.network.neutron [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.314 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.323 254824 DEBUG oslo_concurrency.lockutils [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:05:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:38 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.486 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.487 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4802MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.487 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.488 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.599 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 9f4c3de7-de9e-45d5-b170-3469a0bd0959 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.599 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.600 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.663 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.696 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.699 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 05:05:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100538 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.727 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.750 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.797 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:38.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:38 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:38.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.904 254824 INFO oslo.privsep.daemon [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.748 259938 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.755 259938 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.759 259938 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  6 05:05:38 np0005548915 nova_compute[254819]: 2025-12-06 10:05:38.760 259938 INFO oslo.privsep.daemon [-] privsep daemon running as pid 259938#033[00m
Dec  6 05:05:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:05:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.221 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.223 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4daf2d1-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.224 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4daf2d1-17, col_values=(('external_ids', {'iface-id': 'd4daf2d1-1774-4e84-b69b-60ba95ce1518', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:32:83', 'vm-uuid': '9f4c3de7-de9e-45d5-b170-3469a0bd0959'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.228 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:39 np0005548915 NetworkManager[48882]: <info>  [1765015539.2304] manager: (tapd4daf2d1-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.235 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.240 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.243 254824 INFO os_vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17')#033[00m
Dec  6 05:05:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:05:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829121739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.282 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.287 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.343 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.344 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.344 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:a5:32:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.344 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Using config drive#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.374 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.381 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updated inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.381 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.382 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.403 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:05:39 np0005548915 nova_compute[254819]: 2025-12-06 10:05:39.403 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.213 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Creating config drive at /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.217 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgc54fy_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:40 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.305 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.343 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.344 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.344 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.346 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgc54fy_" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.374 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.378 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:05:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:40 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.399 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.400 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.400 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.400 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:05:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.545 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.546 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deleting local config drive /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config because it was imported into RBD.#033[00m
Dec  6 05:05:40 np0005548915 systemd[1]: Starting libvirt secret daemon...
Dec  6 05:05:40 np0005548915 systemd[1]: Started libvirt secret daemon.
Dec  6 05:05:40 np0005548915 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  6 05:05:40 np0005548915 kernel: tapd4daf2d1-17: entered promiscuous mode
Dec  6 05:05:40 np0005548915 NetworkManager[48882]: <info>  [1765015540.7111] manager: (tapd4daf2d1-17): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec  6 05:05:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:05:40Z|00027|binding|INFO|Claiming lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 for this chassis.
Dec  6 05:05:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:05:40Z|00028|binding|INFO|d4daf2d1-1774-4e84-b69b-60ba95ce1518: Claiming fa:16:3e:a5:32:83 10.100.0.14
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.732 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.735 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.755 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:32:83 10.100.0.14'], port_security=['fa:16:3e:a5:32:83 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9f4c3de7-de9e-45d5-b170-3469a0bd0959', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7c9b5ec-d7a8-44ba-8a79-a0a05df423dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83e40234-7108-4b28-a3a7-b2ef4fad45ac, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=d4daf2d1-1774-4e84-b69b-60ba95ce1518) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:05:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.757 162267 INFO neutron.agent.ovn.metadata.agent [-] Port d4daf2d1-1774-4e84-b69b-60ba95ce1518 in datapath 971faad6-f548-4a54-bc9c-3aa3cca72c6f bound to our chassis#033[00m
Dec  6 05:05:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.760 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 971faad6-f548-4a54-bc9c-3aa3cca72c6f#033[00m
Dec  6 05:05:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.763 162267 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmppprpmqyr/privsep.sock']#033[00m
Dec  6 05:05:40 np0005548915 systemd-udevd[260064]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:05:40 np0005548915 NetworkManager[48882]: <info>  [1765015540.7909] device (tapd4daf2d1-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:05:40 np0005548915 NetworkManager[48882]: <info>  [1765015540.7917] device (tapd4daf2d1-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:05:40 np0005548915 systemd-machined[216202]: New machine qemu-1-instance-00000001.
Dec  6 05:05:40 np0005548915 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.828 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:05:40Z|00029|binding|INFO|Setting lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 ovn-installed in OVS
Dec  6 05:05:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:05:40Z|00030|binding|INFO|Setting lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 up in Southbound
Dec  6 05:05:40 np0005548915 nova_compute[254819]: 2025-12-06 10:05:40.836 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:40.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:40 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  6 05:05:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  6 05:05:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:40.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.364 254824 DEBUG nova.compute.manager [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.364 254824 DEBUG oslo_concurrency.lockutils [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.365 254824 DEBUG oslo_concurrency.lockutils [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.365 254824 DEBUG oslo_concurrency.lockutils [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.365 254824 DEBUG nova.compute.manager [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Processing event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.405 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015541.4049911, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.405 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Started (Lifecycle Event)#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.407 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.428 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.432 254824 INFO nova.virt.libvirt.driver [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance spawned successfully.#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.432 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.451 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.457 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.461 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.462 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.462 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.462 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.463 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.463 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.490 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.491 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015541.4073138, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.491 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Paused (Lifecycle Event)#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.516 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.521 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015541.409797, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.521 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Resumed (Lifecycle Event)#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.543 254824 INFO nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 8.31 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.543 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.544 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.550 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:05:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.567 162267 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 05:05:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.568 162267 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppprpmqyr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  6 05:05:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.397 260126 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 05:05:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.402 260126 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 05:05:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.406 260126 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  6 05:05:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.406 260126 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260126#033[00m
Dec  6 05:05:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.571 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ca09dc-06a6-4b3d-9297-acd6d37daca0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.585 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.605 254824 INFO nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 9.29 seconds to build instance.#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.621 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:41 np0005548915 nova_compute[254819]: 2025-12-06 10:05:41.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.166 260126 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.167 260126 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.167 260126 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:42 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:42 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:05:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Dec  6 05:05:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec  6 05:05:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec  6 05:05:42 np0005548915 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec  6 05:05:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:42.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:42 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a8002b10 fd 39 proxy ignored for local
Dec  6 05:05:42 np0005548915 kernel: ganesha.nfsd[259570]: segfault at 50 ip 00007fb67f92232e sp 00007fb6337fd210 error 4 in libntirpc.so.5.8[7fb67f907000+2c000] likely on CPU 3 (core 0, socket 3)
Dec  6 05:05:42 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.888 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[34f28924-f297-46ff-8459-15fb59753abf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.889 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap971faad6-f1 in ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.892 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap971faad6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.892 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5a538acf-ab2b-4eb9-9818-a57661d4625e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.896 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[723eafe8-c11a-4257-9dde-6171b876a920]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:42.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:42 np0005548915 systemd[1]: Started Process Core Dump (PID 260134/UID 0).
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.928 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[f4aa2210-61e5-4e7e-bbe0-48d7814b60f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.962 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cad93995-b1d0-4f03-9100-1badd7fdfe3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.965 162267 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmplxt6y2rc/privsep.sock']#033[00m
Dec  6 05:05:43 np0005548915 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG nova.compute.manager [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:05:43 np0005548915 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG oslo_concurrency.lockutils [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:43 np0005548915 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG oslo_concurrency.lockutils [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:43 np0005548915 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG oslo_concurrency.lockutils [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:43 np0005548915 nova_compute[254819]: 2025-12-06 10:05:43.444 254824 DEBUG nova.compute.manager [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] No waiting events found dispatching network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:05:43 np0005548915 nova_compute[254819]: 2025-12-06 10:05:43.444 254824 WARNING nova.compute.manager [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received unexpected event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:05:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.697 162267 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 05:05:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.698 162267 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplxt6y2rc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  6 05:05:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.535 260145 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 05:05:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.540 260145 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 05:05:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.543 260145 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  6 05:05:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.544 260145 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260145#033[00m
Dec  6 05:05:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.701 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[627293ea-b333-4c4d-ae91-a70579c39528]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:44 np0005548915 nova_compute[254819]: 2025-12-06 10:05:44.230 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.242 260145 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.242 260145 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.242 260145 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Dec  6 05:05:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:44.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:44.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.935 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e3c80a-2006-449b-86f2-b352e1168717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.107 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a9c466-123b-42d1-8b4c-094a8b804267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.1087] manager: (tap971faad6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Dec  6 05:05:45 np0005548915 systemd-udevd[260157]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.159 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[cf008c7e-ad7b-41dc-99a5-ed0f5c8a0b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.164 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca46fab-577e-49cb-bcaa-387455a89511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2063] device (tap971faad6-f0): carrier: link connected
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.217 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf903e1-719a-4fae-9efb-f9686f4cb7ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.239 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[234e443e-8485-4373-a539-2717a19bdf81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap971faad6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:87:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391569, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260175, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2457] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2461] device (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.244 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2472] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2475] device (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2483] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2489] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2494] device (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.2499] device (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.262 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[790452b7-a657-4fbf-84dc-77fbbc046aed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:8710'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 391569, 'tstamp': 391569}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260177, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.265 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.269 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.285 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f37a421c-e222-4500-92ab-79ea49957054]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap971faad6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:87:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391569, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260179, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.307 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.328 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6d206e-0745-43ee-a67c-b280e3d44c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.404 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[91b5a33b-4edc-49c8-98c9-86b19407af9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.406 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap971faad6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.407 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.408 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap971faad6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.409 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:45 np0005548915 kernel: tap971faad6-f0: entered promiscuous mode
Dec  6 05:05:45 np0005548915 NetworkManager[48882]: <info>  [1765015545.4105] manager: (tap971faad6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.827 254824 DEBUG nova.compute.manager [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.828 254824 DEBUG nova.compute.manager [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing instance network info cache due to event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.828 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap971faad6-f0, col_values=(('external_ids', {'iface-id': '5fb89a54-8c63-4d33-bca3-d7130382f3f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.829 254824 DEBUG oslo_concurrency.lockutils [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.829 254824 DEBUG oslo_concurrency.lockutils [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.830 254824 DEBUG nova.network.neutron [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:05:45 np0005548915 ovn_controller[152417]: 2025-12-06T10:05:45Z|00031|binding|INFO|Releasing lport 5fb89a54-8c63-4d33-bca3-d7130382f3f8 from this chassis (sb_readonly=0)
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.831 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:45 np0005548915 nova_compute[254819]: 2025-12-06 10:05:45.858 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.860 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/971faad6-f548-4a54-bc9c-3aa3cca72c6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/971faad6-f548-4a54-bc9c-3aa3cca72c6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.861 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[202f5845-401e-413f-85ba-2f5e3fc0e1df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.863 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-971faad6-f548-4a54-bc9c-3aa3cca72c6f
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/971faad6-f548-4a54-bc9c-3aa3cca72c6f.pid.haproxy
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID 971faad6-f548-4a54-bc9c-3aa3cca72c6f
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:05:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.864 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'env', 'PROCESS_TAG=haproxy-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/971faad6-f548-4a54-bc9c-3aa3cca72c6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:05:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Dec  6 05:05:46 np0005548915 podman[260212]: 2025-12-06 10:05:46.352322641 +0000 UTC m=+0.030821172 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:05:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:46.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:46.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:46 np0005548915 systemd-coredump[260137]: Process 259480 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007fb67f92232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 05:05:46 np0005548915 podman[260212]: 2025-12-06 10:05:46.964339795 +0000 UTC m=+0.642838526 container create 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 05:05:47 np0005548915 systemd[1]: Started libpod-conmon-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope.
Dec  6 05:05:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:05:47 np0005548915 systemd[1]: systemd-coredump@7-260134-0.service: Deactivated successfully.
Dec  6 05:05:47 np0005548915 systemd[1]: systemd-coredump@7-260134-0.service: Consumed 1.230s CPU time.
Dec  6 05:05:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46aa1eb56c671473c7b08a45b3cc7be7a0d7e60ad9f8373b5056483f751a6f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:47 np0005548915 podman[260212]: 2025-12-06 10:05:47.066434069 +0000 UTC m=+0.744932570 container init 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:05:47 np0005548915 podman[260212]: 2025-12-06 10:05:47.071844185 +0000 UTC m=+0.750342686 container start 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:05:47 np0005548915 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : New worker (260247) forked
Dec  6 05:05:47 np0005548915 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : Loading success.
Dec  6 05:05:47 np0005548915 podman[260232]: 2025-12-06 10:05:47.110266211 +0000 UTC m=+0.040437251 container died cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  6 05:05:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b-merged.mount: Deactivated successfully.
Dec  6 05:05:47 np0005548915 podman[260232]: 2025-12-06 10:05:47.152608372 +0000 UTC m=+0.082779382 container remove cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 05:05:47 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 05:05:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:47.255Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:05:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:47.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:05:47 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 05:05:47 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.579s CPU time.
Dec  6 05:05:47 np0005548915 nova_compute[254819]: 2025-12-06 10:05:47.393 254824 DEBUG nova.network.neutron [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated VIF entry in instance network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:05:47 np0005548915 nova_compute[254819]: 2025-12-06 10:05:47.394 254824 DEBUG nova.network.neutron [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:05:47 np0005548915 nova_compute[254819]: 2025-12-06 10:05:47.440 254824 DEBUG oslo_concurrency.lockutils [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:05:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec  6 05:05:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:49 np0005548915 nova_compute[254819]: 2025-12-06 10:05:49.233 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:50 np0005548915 nova_compute[254819]: 2025-12-06 10:05:50.308 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec  6 05:05:50 np0005548915 podman[260418]: 2025-12-06 10:05:50.78830304 +0000 UTC m=+0.072417114 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 05:05:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:50 np0005548915 podman[260418]: 2025-12-06 10:05:50.896927899 +0000 UTC m=+0.181041953 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:05:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  6 05:05:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  6 05:05:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:50.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:51 np0005548915 podman[260562]: 2025-12-06 10:05:51.505711757 +0000 UTC m=+0.058142259 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:05:51 np0005548915 podman[260562]: 2025-12-06 10:05:51.520288651 +0000 UTC m=+0.072719183 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:05:52 np0005548915 podman[260701]: 2025-12-06 10:05:52.257234884 +0000 UTC m=+0.086432632 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 05:05:52 np0005548915 podman[260701]: 2025-12-06 10:05:52.292905376 +0000 UTC m=+0.122103024 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 05:05:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec  6 05:05:52 np0005548915 podman[260767]: 2025-12-06 10:05:52.555137778 +0000 UTC m=+0.056847794 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=)
Dec  6 05:05:52 np0005548915 podman[260767]: 2025-12-06 10:05:52.566877934 +0000 UTC m=+0.068587950 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, version=2.2.4, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.792287) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552792329, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1050, "num_deletes": 256, "total_data_size": 1696934, "memory_usage": 1722496, "flush_reason": "Manual Compaction"}
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552805630, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1680401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22708, "largest_seqno": 23757, "table_properties": {"data_size": 1675158, "index_size": 2703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11296, "raw_average_key_size": 19, "raw_value_size": 1664427, "raw_average_value_size": 2864, "num_data_blocks": 118, "num_entries": 581, "num_filter_entries": 581, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015477, "oldest_key_time": 1765015477, "file_creation_time": 1765015552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 13417 microseconds, and 4424 cpu microseconds.
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.805698) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1680401 bytes OK
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.805728) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810148) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810174) EVENT_LOG_v1 {"time_micros": 1765015552810165, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810197) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1692004, prev total WAL file size 1692004, number of live WAL files 2.
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810871) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1641KB)], [50(11MB)]
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552810994, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13896549, "oldest_snapshot_seqno": -1}
Dec  6 05:05:52 np0005548915 podman[260831]: 2025-12-06 10:05:52.838559291 +0000 UTC m=+0.074994744 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:05:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:05:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:52.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:05:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100552 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5362 keys, 13714498 bytes, temperature: kUnknown
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552908015, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13714498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13678274, "index_size": 21714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137272, "raw_average_key_size": 25, "raw_value_size": 13580734, "raw_average_value_size": 2532, "num_data_blocks": 884, "num_entries": 5362, "num_filter_entries": 5362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.908364) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13714498 bytes
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.910290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 141.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.7 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(16.4) write-amplify(8.2) OK, records in: 5896, records dropped: 534 output_compression: NoCompression
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.910309) EVENT_LOG_v1 {"time_micros": 1765015552910300, "job": 26, "event": "compaction_finished", "compaction_time_micros": 97120, "compaction_time_cpu_micros": 41865, "output_level": 6, "num_output_files": 1, "total_output_size": 13714498, "num_input_records": 5896, "num_output_records": 5362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552910740, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552913072, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:05:52 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:05:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:52 np0005548915 podman[260861]: 2025-12-06 10:05:52.952552146 +0000 UTC m=+0.072951989 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:05:52 np0005548915 podman[260831]: 2025-12-06 10:05:52.958638299 +0000 UTC m=+0.195073723 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:05:53 np0005548915 podman[260904]: 2025-12-06 10:05:53.19040831 +0000 UTC m=+0.063381000 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 05:05:53 np0005548915 podman[260904]: 2025-12-06 10:05:53.401138303 +0000 UTC m=+0.274111013 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 05:05:53 np0005548915 podman[261014]: 2025-12-06 10:05:53.842453454 +0000 UTC m=+0.054927552 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:05:53 np0005548915 podman[261014]: 2025-12-06 10:05:53.886910363 +0000 UTC m=+0.099384441 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:05:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:05:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:05:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:05:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:05:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:05:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:05:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:05:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:05:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:05:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:05:54 np0005548915 ovn_controller[152417]: 2025-12-06T10:05:54Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:32:83 10.100.0.14
Dec  6 05:05:54 np0005548915 ovn_controller[152417]: 2025-12-06T10:05:54Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:32:83 10.100.0.14
Dec  6 05:05:54 np0005548915 nova_compute[254819]: 2025-12-06 10:05:54.236 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:54.238 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:05:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:05:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:05:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:05:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:05:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 9 op/s
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:05:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:54.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:54 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  6 05:05:55 np0005548915 nova_compute[254819]: 2025-12-06 10:05:55.312 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:55 np0005548915 podman[261228]: 2025-12-06 10:05:55.45534457 +0000 UTC m=+0.048544659 container create df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 05:05:55 np0005548915 systemd[1]: Started libpod-conmon-df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc.scope.
Dec  6 05:05:55 np0005548915 podman[261228]: 2025-12-06 10:05:55.433030749 +0000 UTC m=+0.026230848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:05:55 np0005548915 podman[261228]: 2025-12-06 10:05:55.564172175 +0000 UTC m=+0.157372264 container init df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 05:05:55 np0005548915 podman[261228]: 2025-12-06 10:05:55.572284264 +0000 UTC m=+0.165484343 container start df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:05:55 np0005548915 podman[261228]: 2025-12-06 10:05:55.576302362 +0000 UTC m=+0.169502461 container attach df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 05:05:55 np0005548915 stupefied_liskov[261246]: 167 167
Dec  6 05:05:55 np0005548915 podman[261228]: 2025-12-06 10:05:55.58027223 +0000 UTC m=+0.173472289 container died df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 05:05:55 np0005548915 systemd[1]: libpod-df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc.scope: Deactivated successfully.
Dec  6 05:05:55 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6162db8090cf4fe73870df4b1f5ba78d5770cfcd495a1cf27e3d48d47958e226-merged.mount: Deactivated successfully.
Dec  6 05:05:55 np0005548915 podman[261228]: 2025-12-06 10:05:55.649368353 +0000 UTC m=+0.242568412 container remove df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 05:05:55 np0005548915 systemd[1]: libpod-conmon-df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc.scope: Deactivated successfully.
Dec  6 05:05:55 np0005548915 ceph-mon[74327]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  6 05:05:55 np0005548915 podman[261270]: 2025-12-06 10:05:55.834578448 +0000 UTC m=+0.057881322 container create 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 05:05:55 np0005548915 systemd[1]: Started libpod-conmon-2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5.scope.
Dec  6 05:05:55 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:05:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:55 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:55 np0005548915 podman[261270]: 2025-12-06 10:05:55.806526542 +0000 UTC m=+0.029829476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:55 np0005548915 podman[261270]: 2025-12-06 10:05:55.913009384 +0000 UTC m=+0.136312258 container init 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:05:55 np0005548915 podman[261270]: 2025-12-06 10:05:55.921935914 +0000 UTC m=+0.145238758 container start 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:05:55 np0005548915 podman[261270]: 2025-12-06 10:05:55.924842872 +0000 UTC m=+0.148145716 container attach 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 05:05:56 np0005548915 vigorous_raman[261287]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:05:56 np0005548915 vigorous_raman[261287]: --> All data devices are unavailable
Dec  6 05:05:56 np0005548915 systemd[1]: libpod-2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5.scope: Deactivated successfully.
Dec  6 05:05:56 np0005548915 podman[261270]: 2025-12-06 10:05:56.360273025 +0000 UTC m=+0.583575889 container died 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:05:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49-merged.mount: Deactivated successfully.
Dec  6 05:05:56 np0005548915 podman[261270]: 2025-12-06 10:05:56.406129532 +0000 UTC m=+0.629432376 container remove 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 05:05:56 np0005548915 systemd[1]: libpod-conmon-2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5.scope: Deactivated successfully.
Dec  6 05:05:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 9 op/s
Dec  6 05:05:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:56.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:05:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:05:57 np0005548915 podman[261406]: 2025-12-06 10:05:57.046807269 +0000 UTC m=+0.051977052 container create f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:05:57 np0005548915 systemd[1]: Started libpod-conmon-f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48.scope.
Dec  6 05:05:57 np0005548915 podman[261406]: 2025-12-06 10:05:57.028741122 +0000 UTC m=+0.033910935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:05:57 np0005548915 podman[261406]: 2025-12-06 10:05:57.144148204 +0000 UTC m=+0.149318017 container init f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 05:05:57 np0005548915 podman[261406]: 2025-12-06 10:05:57.156108627 +0000 UTC m=+0.161278450 container start f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:05:57 np0005548915 xenodochial_shirley[261423]: 167 167
Dec  6 05:05:57 np0005548915 podman[261406]: 2025-12-06 10:05:57.165916301 +0000 UTC m=+0.171086134 container attach f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 05:05:57 np0005548915 systemd[1]: libpod-f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48.scope: Deactivated successfully.
Dec  6 05:05:57 np0005548915 podman[261406]: 2025-12-06 10:05:57.16697479 +0000 UTC m=+0.172144593 container died f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 05:05:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c60764e23f8dd7aa2a260b374ffccebd588ec8bef7b228a814907be676f388c6-merged.mount: Deactivated successfully.
Dec  6 05:05:57 np0005548915 podman[261406]: 2025-12-06 10:05:57.22296858 +0000 UTC m=+0.228138383 container remove f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:05:57 np0005548915 systemd[1]: libpod-conmon-f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48.scope: Deactivated successfully.
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:57.257Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:57.257Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:57.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:05:57 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 8.
Dec  6 05:05:57 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:05:57 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.579s CPU time.
Dec  6 05:05:57 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 05:05:57 np0005548915 podman[261448]: 2025-12-06 10:05:57.458228034 +0000 UTC m=+0.068238141 container create df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 05:05:57 np0005548915 systemd[1]: Started libpod-conmon-df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558.scope.
Dec  6 05:05:57 np0005548915 podman[261448]: 2025-12-06 10:05:57.422257784 +0000 UTC m=+0.032267991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 podman[261448]: 2025-12-06 10:05:57.542381954 +0000 UTC m=+0.152392081 container init df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:05:57 np0005548915 podman[261448]: 2025-12-06 10:05:57.551984403 +0000 UTC m=+0.161994500 container start df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:05:57 np0005548915 podman[261448]: 2025-12-06 10:05:57.560275597 +0000 UTC m=+0.170285754 container attach df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 05:05:57 np0005548915 podman[261482]: 2025-12-06 10:05:57.568011145 +0000 UTC m=+0.072205518 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 05:05:57 np0005548915 podman[261534]: 2025-12-06 10:05:57.676259104 +0000 UTC m=+0.062117246 container create 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 05:05:57 np0005548915 podman[261534]: 2025-12-06 10:05:57.646098251 +0000 UTC m=+0.031956413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:57 np0005548915 podman[261534]: 2025-12-06 10:05:57.768780269 +0000 UTC m=+0.154638401 container init 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:05:57 np0005548915 podman[261534]: 2025-12-06 10:05:57.773397894 +0000 UTC m=+0.159256006 container start 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:05:57 np0005548915 bash[261534]: 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 05:05:57 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 05:05:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 05:05:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:05:57 np0005548915 cool_allen[261489]: {
Dec  6 05:05:57 np0005548915 cool_allen[261489]:    "1": [
Dec  6 05:05:57 np0005548915 cool_allen[261489]:        {
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "devices": [
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "/dev/loop3"
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            ],
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "lv_name": "ceph_lv0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "lv_size": "21470642176",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "name": "ceph_lv0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "tags": {
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.cluster_name": "ceph",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.crush_device_class": "",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.encrypted": "0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.osd_id": "1",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.type": "block",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.vdo": "0",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:                "ceph.with_tpm": "0"
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            },
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "type": "block",
Dec  6 05:05:57 np0005548915 cool_allen[261489]:            "vg_name": "ceph_vg0"
Dec  6 05:05:57 np0005548915 cool_allen[261489]:        }
Dec  6 05:05:57 np0005548915 cool_allen[261489]:    ]
Dec  6 05:05:57 np0005548915 cool_allen[261489]: }
Dec  6 05:05:57 np0005548915 systemd[1]: libpod-df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558.scope: Deactivated successfully.
Dec  6 05:05:57 np0005548915 podman[261448]: 2025-12-06 10:05:57.912953087 +0000 UTC m=+0.522963184 container died df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:05:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05-merged.mount: Deactivated successfully.
Dec  6 05:05:57 np0005548915 podman[261448]: 2025-12-06 10:05:57.963671586 +0000 UTC m=+0.573681683 container remove df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 05:05:57 np0005548915 systemd[1]: libpod-conmon-df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558.scope: Deactivated successfully.
Dec  6 05:05:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec  6 05:05:58 np0005548915 podman[261700]: 2025-12-06 10:05:58.761939583 +0000 UTC m=+0.072358642 container create d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:05:58 np0005548915 systemd[1]: Started libpod-conmon-d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a.scope.
Dec  6 05:05:58 np0005548915 podman[261700]: 2025-12-06 10:05:58.737450493 +0000 UTC m=+0.047869652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:05:58 np0005548915 podman[261700]: 2025-12-06 10:05:58.872004141 +0000 UTC m=+0.182423220 container init d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:05:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:58.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:58 np0005548915 podman[261700]: 2025-12-06 10:05:58.882036581 +0000 UTC m=+0.192455640 container start d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:05:58 np0005548915 podman[261700]: 2025-12-06 10:05:58.885453034 +0000 UTC m=+0.195872113 container attach d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 05:05:58 np0005548915 distracted_almeida[261716]: 167 167
Dec  6 05:05:58 np0005548915 systemd[1]: libpod-d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a.scope: Deactivated successfully.
Dec  6 05:05:58 np0005548915 podman[261700]: 2025-12-06 10:05:58.892655068 +0000 UTC m=+0.203074127 container died d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:05:58 np0005548915 systemd[1]: var-lib-containers-storage-overlay-17e838d55b9b630dfd74920374842b9b23c38ad51a91b5482d904069421e07d8-merged.mount: Deactivated successfully.
Dec  6 05:05:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:05:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:05:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:05:58 np0005548915 podman[261700]: 2025-12-06 10:05:58.945607886 +0000 UTC m=+0.256026945 container remove d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 05:05:58 np0005548915 systemd[1]: libpod-conmon-d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a.scope: Deactivated successfully.
Dec  6 05:05:59 np0005548915 podman[261740]: 2025-12-06 10:05:59.141601761 +0000 UTC m=+0.051034397 container create 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:05:59 np0005548915 systemd[1]: Started libpod-conmon-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope.
Dec  6 05:05:59 np0005548915 podman[261740]: 2025-12-06 10:05:59.119455414 +0000 UTC m=+0.028888090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:05:59 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:05:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:05:59 np0005548915 podman[261740]: 2025-12-06 10:05:59.238226448 +0000 UTC m=+0.147659164 container init 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:05:59 np0005548915 nova_compute[254819]: 2025-12-06 10:05:59.240 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:05:59 np0005548915 podman[261740]: 2025-12-06 10:05:59.249312747 +0000 UTC m=+0.158745403 container start 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:05:59 np0005548915 podman[261740]: 2025-12-06 10:05:59.253031757 +0000 UTC m=+0.162464393 container attach 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 05:05:59 np0005548915 lvm[261832]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:05:59 np0005548915 lvm[261832]: VG ceph_vg0 finished
Dec  6 05:06:00 np0005548915 sleepy_lamarr[261756]: {}
Dec  6 05:06:00 np0005548915 systemd[1]: libpod-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope: Deactivated successfully.
Dec  6 05:06:00 np0005548915 systemd[1]: libpod-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope: Consumed 1.411s CPU time.
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.127 254824 INFO nova.compute.manager [None req-c381409c-f4e1-4670-9fe8-eae9c687de24 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Get console output#033[00m
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.136 254824 INFO oslo.privsep.daemon [None req-c381409c-f4e1-4670-9fe8-eae9c687de24 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp97bp4g8e/privsep.sock']#033[00m
Dec  6 05:06:00 np0005548915 podman[261836]: 2025-12-06 10:06:00.162176764 +0000 UTC m=+0.045616881 container died 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  6 05:06:00 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f-merged.mount: Deactivated successfully.
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.313 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:00 np0005548915 podman[261836]: 2025-12-06 10:06:00.34779846 +0000 UTC m=+0.231238527 container remove 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:06:00 np0005548915 systemd[1]: libpod-conmon-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope: Deactivated successfully.
Dec  6 05:06:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:06:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:06:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:06:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:06:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.875 254824 INFO oslo.privsep.daemon [None req-c381409c-f4e1-4670-9fe8-eae9c687de24 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.740 261881 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.747 261881 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.753 261881 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.754 261881 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261881#033[00m
Dec  6 05:06:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:00.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Dec  6 05:06:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Dec  6 05:06:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:00.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:00 np0005548915 nova_compute[254819]: 2025-12-06 10:06:00.983 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:06:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:06:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:06:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec  6 05:06:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:02.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:02.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:03 np0005548915 podman[261886]: 2025-12-06 10:06:03.526310197 +0000 UTC m=+0.139834562 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 05:06:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:03 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:06:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:03 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:06:04 np0005548915 nova_compute[254819]: 2025-12-06 10:06:04.246 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Dec  6 05:06:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:04.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:05 np0005548915 nova_compute[254819]: 2025-12-06 10:06:05.315 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:06:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:06.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:06.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:07.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:06:07 np0005548915 podman[261916]: 2025-12-06 10:06:07.438179043 +0000 UTC m=+0.063077152 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 05:06:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:08 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:08.104 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:06:08 np0005548915 nova_compute[254819]: 2025-12-06 10:06:08.104 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:08 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:08.105 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:06:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  6 05:06:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:08.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:08.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  6 05:06:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:06:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:06:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:06:09 np0005548915 nova_compute[254819]: 2025-12-06 10:06:09.249 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 05:06:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:06:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:10 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44a0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:10 np0005548915 nova_compute[254819]: 2025-12-06 10:06:10.318 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:10 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4498001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 17 KiB/s wr, 3 op/s
Dec  6 05:06:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:10 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:10.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:06:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:06:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:10.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:12 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4470000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:12 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 17 KiB/s wr, 3 op/s
Dec  6 05:06:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100612 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:06:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:12 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:12.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:14 np0005548915 nova_compute[254819]: 2025-12-06 10:06:14.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:14 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:14 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 18 KiB/s wr, 4 op/s
Dec  6 05:06:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:14 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:14.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:14.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:15.107 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:06:15 np0005548915 nova_compute[254819]: 2025-12-06 10:06:15.371 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:15 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  6 05:06:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:16 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:16 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 2 op/s
Dec  6 05:06:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:16 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:16.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:16.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:17.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:06:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:18 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:18 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec  6 05:06:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:18 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:19 np0005548915 nova_compute[254819]: 2025-12-06 10:06:19.254 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:20 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:20 np0005548915 nova_compute[254819]: 2025-12-06 10:06:20.373 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:20 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  6 05:06:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:20 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:06:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:06:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:20.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:22 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:22 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  6 05:06:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:22 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:22.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:06:23
Dec  6 05:06:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:06:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:06:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.nfs', 'vms', 'images', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control']
Dec  6 05:06:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:06:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:06:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:06:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:06:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011057152275835123 of space, bias 1.0, pg target 0.3317145682750537 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:06:24 np0005548915 nova_compute[254819]: 2025-12-06 10:06:24.257 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:24 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:24 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:06:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  6 05:06:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:24 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:24.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:25 np0005548915 nova_compute[254819]: 2025-12-06 10:06:25.375 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:26 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:26 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  6 05:06:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:26 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:26.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:26.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:27.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:06:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:28 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:28 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:28 np0005548915 podman[261996]: 2025-12-06 10:06:28.43467583 +0000 UTC m=+0.064632214 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Dec  6 05:06:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec  6 05:06:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:28 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:28.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:06:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:06:29 np0005548915 nova_compute[254819]: 2025-12-06 10:06:29.276 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:30 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:30 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4470003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:30 np0005548915 nova_compute[254819]: 2025-12-06 10:06:30.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  6 05:06:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:30 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:06:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:06:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:30.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:30.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:32 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:32 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  6 05:06:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:32 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4470003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:06:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:32.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:32.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:33 np0005548915 nova_compute[254819]: 2025-12-06 10:06:33.500 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:34 np0005548915 nova_compute[254819]: 2025-12-06 10:06:34.279 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:34 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy ignored for local
Dec  6 05:06:34 np0005548915 kernel: ganesha.nfsd[261944]: segfault at 50 ip 00007f454c3ee32e sp 00007f4518ff8210 error 4 in libntirpc.so.5.8[7f454c3d3000+2c000] likely on CPU 0 (core 0, socket 0)
Dec  6 05:06:34 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 05:06:34 np0005548915 systemd[1]: Started Process Core Dump (PID 262049/UID 0).
Dec  6 05:06:34 np0005548915 podman[262048]: 2025-12-06 10:06:34.395347146 +0000 UTC m=+0.085837095 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 05:06:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  6 05:06:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:34.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:06:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:34.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:06:35 np0005548915 nova_compute[254819]: 2025-12-06 10:06:35.434 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:35 np0005548915 systemd-coredump[262055]: Process 261556 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 46:#012#0  0x00007f454c3ee32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 05:06:35 np0005548915 systemd[1]: systemd-coredump@8-262049-0.service: Deactivated successfully.
Dec  6 05:06:35 np0005548915 systemd[1]: systemd-coredump@8-262049-0.service: Consumed 1.115s CPU time.
Dec  6 05:06:35 np0005548915 podman[262082]: 2025-12-06 10:06:35.88504169 +0000 UTC m=+0.026350271 container died 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:06:36 np0005548915 systemd[1]: var-lib-containers-storage-overlay-66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf-merged.mount: Deactivated successfully.
Dec  6 05:06:36 np0005548915 podman[262082]: 2025-12-06 10:06:36.108945418 +0000 UTC m=+0.250253999 container remove 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:06:36 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 05:06:36 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 05:06:36 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.448s CPU time.
Dec  6 05:06:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  6 05:06:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:36.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:36.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:37.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:06:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:38 np0005548915 podman[262126]: 2025-12-06 10:06:38.415575254 +0000 UTC m=+0.047969266 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  6 05:06:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec  6 05:06:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:38.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:06:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:06:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.280 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.780 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.781 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:06:39 np0005548915 nova_compute[254819]: 2025-12-06 10:06:39.782 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:06:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:06:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103758457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.236 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.309 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.309 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.471 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.483 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.484 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4448MB free_disk=59.897621154785156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.484 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.485 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.574 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 9f4c3de7-de9e-45d5-b170-3469a0bd0959 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.575 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.575 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:06:40 np0005548915 nova_compute[254819]: 2025-12-06 10:06:40.675 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:06:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec  6 05:06:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100640 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:06:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:06:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:06:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:40.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:41.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:06:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975280571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:06:41 np0005548915 nova_compute[254819]: 2025-12-06 10:06:41.145 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:06:41 np0005548915 nova_compute[254819]: 2025-12-06 10:06:41.152 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:06:41 np0005548915 nova_compute[254819]: 2025-12-06 10:06:41.171 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:06:41 np0005548915 nova_compute[254819]: 2025-12-06 10:06:41.202 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:06:41 np0005548915 nova_compute[254819]: 2025-12-06 10:06:41.203 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.197 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.221 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.222 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.222 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.446 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.447 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.447 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:06:42 np0005548915 nova_compute[254819]: 2025-12-06 10:06:42.447 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:06:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec  6 05:06:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:42.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:43.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:43 np0005548915 nova_compute[254819]: 2025-12-06 10:06:43.781 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:06:43 np0005548915 nova_compute[254819]: 2025-12-06 10:06:43.797 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:06:43 np0005548915 nova_compute[254819]: 2025-12-06 10:06:43.798 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:06:43 np0005548915 nova_compute[254819]: 2025-12-06 10:06:43.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:43 np0005548915 nova_compute[254819]: 2025-12-06 10:06:43.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:43 np0005548915 nova_compute[254819]: 2025-12-06 10:06:43.800 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:43 np0005548915 nova_compute[254819]: 2025-12-06 10:06:43.800 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:44 np0005548915 nova_compute[254819]: 2025-12-06 10:06:44.283 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec  6 05:06:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:44.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:06:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:45.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:06:45 np0005548915 nova_compute[254819]: 2025-12-06 10:06:45.345 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:06:45 np0005548915 nova_compute[254819]: 2025-12-06 10:06:45.474 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:46 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 9.
Dec  6 05:06:46 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:06:46 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.448s CPU time.
Dec  6 05:06:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:46.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:47.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:06:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec  6 05:06:47 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 05:06:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:06:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518606672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:06:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:06:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518606672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:06:47 np0005548915 podman[262251]: 2025-12-06 10:06:47.547203276 +0000 UTC m=+0.022871908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:06:47 np0005548915 podman[262251]: 2025-12-06 10:06:47.68010306 +0000 UTC m=+0.155771662 container create f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:06:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 05:06:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:06:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:06:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:06:47 np0005548915 podman[262251]: 2025-12-06 10:06:47.749124041 +0000 UTC m=+0.224792723 container init f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:06:47 np0005548915 podman[262251]: 2025-12-06 10:06:47.754116476 +0000 UTC m=+0.229785108 container start f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 05:06:47 np0005548915 bash[262251]: f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 05:06:47 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 05:06:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 05:06:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:06:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec  6 05:06:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:48.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:49 np0005548915 nova_compute[254819]: 2025-12-06 10:06:49.287 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:06:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3586066368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:06:50 np0005548915 nova_compute[254819]: 2025-12-06 10:06:50.476 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 70 KiB/s wr, 25 op/s
Dec  6 05:06:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:06:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:06:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:51.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:52 np0005548915 ovn_controller[152417]: 2025-12-06T10:06:52Z|00032|binding|INFO|Releasing lport 5fb89a54-8c63-4d33-bca3-d7130382f3f8 from this chassis (sb_readonly=0)
Dec  6 05:06:52 np0005548915 nova_compute[254819]: 2025-12-06 10:06:52.516 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 70 KiB/s wr, 25 op/s
Dec  6 05:06:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:52.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:53.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.347 254824 DEBUG nova.compute.manager [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.347 254824 DEBUG nova.compute.manager [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing instance network info cache due to event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.348 254824 DEBUG oslo_concurrency.lockutils [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.348 254824 DEBUG oslo_concurrency.lockutils [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.348 254824 DEBUG nova.network.neutron [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.442 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.443 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.443 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.444 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.444 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.446 254824 INFO nova.compute.manager [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Terminating instance#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.447 254824 DEBUG nova.compute.manager [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:06:53 np0005548915 kernel: tapd4daf2d1-17 (unregistering): left promiscuous mode
Dec  6 05:06:53 np0005548915 NetworkManager[48882]: <info>  [1765015613.5091] device (tapd4daf2d1-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.523 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 ovn_controller[152417]: 2025-12-06T10:06:53Z|00033|binding|INFO|Releasing lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 from this chassis (sb_readonly=0)
Dec  6 05:06:53 np0005548915 ovn_controller[152417]: 2025-12-06T10:06:53Z|00034|binding|INFO|Setting lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 down in Southbound
Dec  6 05:06:53 np0005548915 ovn_controller[152417]: 2025-12-06T10:06:53Z|00035|binding|INFO|Removing iface tapd4daf2d1-17 ovn-installed in OVS
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.527 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.532 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:32:83 10.100.0.14'], port_security=['fa:16:3e:a5:32:83 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9f4c3de7-de9e-45d5-b170-3469a0bd0959', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7c9b5ec-d7a8-44ba-8a79-a0a05df423dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83e40234-7108-4b28-a3a7-b2ef4fad45ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=d4daf2d1-1774-4e84-b69b-60ba95ce1518) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.534 162267 INFO neutron.agent.ovn.metadata.agent [-] Port d4daf2d1-1774-4e84-b69b-60ba95ce1518 in datapath 971faad6-f548-4a54-bc9c-3aa3cca72c6f unbound from our chassis#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.535 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 971faad6-f548-4a54-bc9c-3aa3cca72c6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.536 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4ab85e-8a3f-4d2d-b735-7461844b8433]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.537 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f namespace which is not needed anymore#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.546 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  6 05:06:53 np0005548915 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.686s CPU time.
Dec  6 05:06:53 np0005548915 systemd-machined[216202]: Machine qemu-1-instance-00000001 terminated.
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.690 254824 INFO nova.virt.libvirt.driver [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance destroyed successfully.#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.691 254824 DEBUG nova.objects.instance [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.703 254824 DEBUG nova.virt.libvirt.vif [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430712907',display_name='tempest-TestNetworkBasicOps-server-1430712907',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430712907',id=1,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAfMPOvgHaRlqGgLXkto0FcIKRTuQseDyB3UM7MdJ4qc4V82jaOJG1wyoIF6xrRvoJcXVE+RFVPueMCiHrP5rYBgCoIkNmahi09ifuS6NMzBYr/VB4Uf4Lhhp6Gu2WU0Q==',key_name='tempest-TestNetworkBasicOps-1259992561',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:05:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-m1904u1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:05:41Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=9f4c3de7-de9e-45d5-b170-3469a0bd0959,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.704 254824 DEBUG nova.network.os_vif_util [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.705 254824 DEBUG nova.network.os_vif_util [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.705 254824 DEBUG os_vif [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:06:53 np0005548915 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : haproxy version is 2.8.14-c23fe91
Dec  6 05:06:53 np0005548915 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : path to executable is /usr/sbin/haproxy
Dec  6 05:06:53 np0005548915 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [WARNING]  (260240) : Exiting Master process...
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.708 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.708 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4daf2d1-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:06:53 np0005548915 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [ALERT]    (260240) : Current worker (260247) exited with code 143 (Terminated)
Dec  6 05:06:53 np0005548915 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [WARNING]  (260240) : All workers exited. Exiting... (0)
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.711 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 systemd[1]: libpod-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope: Deactivated successfully.
Dec  6 05:06:53 np0005548915 conmon[260226]: conmon 21554fb920b8cd6e7729 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope/container/memory.events
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.714 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 podman[262365]: 2025-12-06 10:06:53.71861832 +0000 UTC m=+0.063155684 container died 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.720 254824 INFO os_vif [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17')#033[00m
Dec  6 05:06:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4-userdata-shm.mount: Deactivated successfully.
Dec  6 05:06:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d46aa1eb56c671473c7b08a45b3cc7be7a0d7e60ad9f8373b5056483f751a6f5-merged.mount: Deactivated successfully.
Dec  6 05:06:53 np0005548915 podman[262365]: 2025-12-06 10:06:53.790810167 +0000 UTC m=+0.135347531 container cleanup 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:06:53 np0005548915 systemd[1]: libpod-conmon-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope: Deactivated successfully.
Dec  6 05:06:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:53 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:06:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:53 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:06:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:53 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 05:06:53 np0005548915 podman[262425]: 2025-12-06 10:06:53.866752825 +0000 UTC m=+0.051050348 container remove 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.872 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d698ad1e-289b-4e39-aa1f-9217c550b24a]: (4, ('Sat Dec  6 10:06:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f (21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4)\n21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4\nSat Dec  6 10:06:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f (21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4)\n21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.874 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7c09be19-4a0c-482e-985e-5ea17f0b1576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.875 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap971faad6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.882 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 kernel: tap971faad6-f0: left promiscuous mode
Dec  6 05:06:53 np0005548915 nova_compute[254819]: 2025-12-06 10:06:53.900 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.904 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1a2fdb-0723-461c-bd80-f036b4ffb785]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.915 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4d40db6b-8be3-4dce-aeaa-5bb463191d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.917 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bccfd7ba-8e9d-4b3c-b0fd-202ca92ef82b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:06:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.942 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[db436cca-9741-4406-bf04-3f49288700d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391541, 'reachable_time': 37060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262441, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 systemd[1]: run-netns-ovnmeta\x2d971faad6\x2df548\x2d4a54\x2dbc9c\x2d3aa3cca72c6f.mount: Deactivated successfully.
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.958 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:06:53 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.959 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[b31dd3dd-0b92-470e-a06d-9dc571fe551e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:06:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:06:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:06:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:06:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:06:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:06:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.120 254824 INFO nova.virt.libvirt.driver [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deleting instance files /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959_del#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.121 254824 INFO nova.virt.libvirt.driver [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deletion of /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959_del complete#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.188 254824 DEBUG nova.virt.libvirt.host [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.188 254824 INFO nova.virt.libvirt.host [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] UEFI support detected#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.190 254824 INFO nova.compute.manager [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.190 254824 DEBUG oslo.service.loopingcall [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.191 254824 DEBUG nova.compute.manager [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.191 254824 DEBUG nova.network.neutron [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:06:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:54.238 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:06:54.240 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 71 KiB/s wr, 43 op/s
Dec  6 05:06:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100654 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.811 254824 DEBUG nova.network.neutron [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated VIF entry in instance network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.811 254824 DEBUG nova.network.neutron [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.844 254824 DEBUG nova.network.neutron [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.846 254824 DEBUG oslo_concurrency.lockutils [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.868 254824 INFO nova.compute.manager [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 0.68 seconds to deallocate network for instance.#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.914 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.915 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:54.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:54 np0005548915 nova_compute[254819]: 2025-12-06 10:06:54.968 254824 DEBUG oslo_concurrency.processutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:06:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:55.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:06:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604483348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.430 254824 DEBUG oslo_concurrency.processutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.439 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-unplugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.440 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.441 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.442 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.442 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] No waiting events found dispatching network-vif-unplugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.443 254824 WARNING nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received unexpected event network-vif-unplugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.443 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.444 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.444 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.445 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.445 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] No waiting events found dispatching network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.446 254824 WARNING nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received unexpected event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.446 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-deleted-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.454 254824 DEBUG nova.compute.provider_tree [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.477 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.484 254824 DEBUG nova.scheduler.client.report [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.506 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.538 254824 INFO nova.scheduler.client.report [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 9f4c3de7-de9e-45d5-b170-3469a0bd0959#033[00m
Dec  6 05:06:55 np0005548915 nova_compute[254819]: 2025-12-06 10:06:55.600 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:06:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Dec  6 05:06:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:56.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:57.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:57.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:06:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:06:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:57 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:06:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:57 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:06:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:57 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:06:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  6 05:06:58 np0005548915 nova_compute[254819]: 2025-12-06 10:06:58.712 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:06:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 58 op/s
Dec  6 05:06:58 np0005548915 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:57504] [POST] [200] [0.002s] [4.0B] [4cb14160-5b65-4afb-a82e-30454655d65e] /api/prometheus_receiver
Dec  6 05:06:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:06:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:06:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:06:59 np0005548915 podman[262471]: 2025-12-06 10:06:59.458814931 +0000 UTC m=+0.083482172 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Dec  6 05:07:00 np0005548915 nova_compute[254819]: 2025-12-06 10:07:00.479 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:07:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:07:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:07:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 45 op/s
Dec  6 05:07:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Dec  6 05:07:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Dec  6 05:07:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:00.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:07:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:01.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:07:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:07:02 np0005548915 podman[262665]: 2025-12-06 10:07:02.064049169 +0000 UTC m=+0.065613670 container create 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:07:02 np0005548915 systemd[1]: Started libpod-conmon-0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6.scope.
Dec  6 05:07:02 np0005548915 podman[262665]: 2025-12-06 10:07:02.036204809 +0000 UTC m=+0.037769370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:07:02 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:07:02 np0005548915 podman[262665]: 2025-12-06 10:07:02.183652975 +0000 UTC m=+0.185217536 container init 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:07:02 np0005548915 podman[262665]: 2025-12-06 10:07:02.196639016 +0000 UTC m=+0.198203517 container start 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:07:02 np0005548915 podman[262665]: 2025-12-06 10:07:02.201764233 +0000 UTC m=+0.203328784 container attach 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 05:07:02 np0005548915 funny_khayyam[262681]: 167 167
Dec  6 05:07:02 np0005548915 systemd[1]: libpod-0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6.scope: Deactivated successfully.
Dec  6 05:07:02 np0005548915 podman[262665]: 2025-12-06 10:07:02.206116581 +0000 UTC m=+0.207681092 container died 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 05:07:02 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2def1e8bd4a3c7f75247f391a9a808d0347d552533bc6cb2c6ff24451dcad812-merged.mount: Deactivated successfully.
Dec  6 05:07:02 np0005548915 podman[262665]: 2025-12-06 10:07:02.259997054 +0000 UTC m=+0.261561565 container remove 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:07:02 np0005548915 systemd[1]: libpod-conmon-0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6.scope: Deactivated successfully.
Dec  6 05:07:02 np0005548915 podman[262707]: 2025-12-06 10:07:02.490723997 +0000 UTC m=+0.057083501 container create ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 05:07:02 np0005548915 systemd[1]: Started libpod-conmon-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope.
Dec  6 05:07:02 np0005548915 podman[262707]: 2025-12-06 10:07:02.466736679 +0000 UTC m=+0.033096213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:07:02 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:07:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:02 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:02 np0005548915 podman[262707]: 2025-12-06 10:07:02.577304261 +0000 UTC m=+0.143663815 container init ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:07:02 np0005548915 podman[262707]: 2025-12-06 10:07:02.591898455 +0000 UTC m=+0.158257939 container start ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:07:02 np0005548915 podman[262707]: 2025-12-06 10:07:02.595774849 +0000 UTC m=+0.162134423 container attach ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 05:07:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:02 np0005548915 nova_compute[254819]: 2025-12-06 10:07:02.889 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:02 np0005548915 sweet_fermi[262724]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:07:02 np0005548915 sweet_fermi[262724]: --> All data devices are unavailable
Dec  6 05:07:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:02.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:02 np0005548915 systemd[1]: libpod-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope: Deactivated successfully.
Dec  6 05:07:02 np0005548915 conmon[262724]: conmon ce59a0a3bd28c14d7042 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope/container/memory.events
Dec  6 05:07:02 np0005548915 podman[262707]: 2025-12-06 10:07:02.958642706 +0000 UTC m=+0.525002270 container died ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 05:07:02 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049-merged.mount: Deactivated successfully.
Dec  6 05:07:02 np0005548915 nova_compute[254819]: 2025-12-06 10:07:02.997 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:03 np0005548915 podman[262707]: 2025-12-06 10:07:03.020304068 +0000 UTC m=+0.586663562 container remove ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:07:03 np0005548915 systemd[1]: libpod-conmon-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope: Deactivated successfully.
Dec  6 05:07:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:03.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Dec  6 05:07:03 np0005548915 nova_compute[254819]: 2025-12-06 10:07:03.750 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:03 np0005548915 podman[262846]: 2025-12-06 10:07:03.7716402 +0000 UTC m=+0.061540380 container create 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 05:07:03 np0005548915 systemd[1]: Started libpod-conmon-171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb.scope.
Dec  6 05:07:03 np0005548915 podman[262846]: 2025-12-06 10:07:03.743004708 +0000 UTC m=+0.032904938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:07:03 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:07:03 np0005548915 podman[262846]: 2025-12-06 10:07:03.868093081 +0000 UTC m=+0.157993281 container init 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 05:07:03 np0005548915 podman[262846]: 2025-12-06 10:07:03.878589524 +0000 UTC m=+0.168489694 container start 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:07:03 np0005548915 podman[262846]: 2025-12-06 10:07:03.882580481 +0000 UTC m=+0.172480661 container attach 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 05:07:03 np0005548915 affectionate_goodall[262862]: 167 167
Dec  6 05:07:03 np0005548915 systemd[1]: libpod-171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb.scope: Deactivated successfully.
Dec  6 05:07:03 np0005548915 podman[262846]: 2025-12-06 10:07:03.885678606 +0000 UTC m=+0.175578756 container died 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 05:07:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a20e69d3b3cae0743c5620068554120ae0b8d6406cbfffc688e13ecf495dad12-merged.mount: Deactivated successfully.
Dec  6 05:07:03 np0005548915 podman[262846]: 2025-12-06 10:07:03.92442136 +0000 UTC m=+0.214321510 container remove 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:07:03 np0005548915 systemd[1]: libpod-conmon-171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb.scope: Deactivated successfully.
Dec  6 05:07:04 np0005548915 podman[262883]: 2025-12-06 10:07:04.141450303 +0000 UTC m=+0.069648219 container create cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec  6 05:07:04 np0005548915 systemd[1]: Started libpod-conmon-cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf.scope.
Dec  6 05:07:04 np0005548915 podman[262883]: 2025-12-06 10:07:04.111900806 +0000 UTC m=+0.040098802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:07:04 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:07:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:04 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:04 np0005548915 podman[262883]: 2025-12-06 10:07:04.245198701 +0000 UTC m=+0.173396627 container init cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:07:04 np0005548915 podman[262883]: 2025-12-06 10:07:04.258053037 +0000 UTC m=+0.186250953 container start cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:07:04 np0005548915 podman[262883]: 2025-12-06 10:07:04.261884271 +0000 UTC m=+0.190082197 container attach cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]: {
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:    "1": [
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:        {
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "devices": [
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "/dev/loop3"
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            ],
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "lv_name": "ceph_lv0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "lv_size": "21470642176",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "name": "ceph_lv0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "tags": {
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.cluster_name": "ceph",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.crush_device_class": "",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.encrypted": "0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.osd_id": "1",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.type": "block",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.vdo": "0",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:                "ceph.with_tpm": "0"
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            },
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "type": "block",
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:            "vg_name": "ceph_vg0"
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:        }
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]:    ]
Dec  6 05:07:04 np0005548915 relaxed_lalande[262900]: }
Dec  6 05:07:04 np0005548915 systemd[1]: libpod-cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf.scope: Deactivated successfully.
Dec  6 05:07:04 np0005548915 podman[262883]: 2025-12-06 10:07:04.600226306 +0000 UTC m=+0.528424212 container died cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 05:07:04 np0005548915 systemd[1]: var-lib-containers-storage-overlay-af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282-merged.mount: Deactivated successfully.
Dec  6 05:07:04 np0005548915 podman[262883]: 2025-12-06 10:07:04.644091068 +0000 UTC m=+0.572288994 container remove cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:07:04 np0005548915 systemd[1]: libpod-conmon-cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf.scope: Deactivated successfully.
Dec  6 05:07:04 np0005548915 podman[262909]: 2025-12-06 10:07:04.735525294 +0000 UTC m=+0.103894523 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  6 05:07:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:05.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:05 np0005548915 podman[263034]: 2025-12-06 10:07:05.273873502 +0000 UTC m=+0.054360817 container create ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 05:07:05 np0005548915 systemd[1]: Started libpod-conmon-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope.
Dec  6 05:07:05 np0005548915 podman[263034]: 2025-12-06 10:07:05.250848001 +0000 UTC m=+0.031335346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:07:05 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:07:05 np0005548915 podman[263034]: 2025-12-06 10:07:05.372465441 +0000 UTC m=+0.152952766 container init ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 05:07:05 np0005548915 podman[263034]: 2025-12-06 10:07:05.381866975 +0000 UTC m=+0.162354280 container start ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:07:05 np0005548915 podman[263034]: 2025-12-06 10:07:05.385723488 +0000 UTC m=+0.166210823 container attach ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:07:05 np0005548915 frosty_chatelet[263051]: 167 167
Dec  6 05:07:05 np0005548915 systemd[1]: libpod-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope: Deactivated successfully.
Dec  6 05:07:05 np0005548915 conmon[263051]: conmon ab3be5cfdf3842f133d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope/container/memory.events
Dec  6 05:07:05 np0005548915 podman[263034]: 2025-12-06 10:07:05.394173606 +0000 UTC m=+0.174660941 container died ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:07:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay-78f8d974e6e22a60de7e98ac26d15a3765683d3df75263191e336edd3dfc1bc7-merged.mount: Deactivated successfully.
Dec  6 05:07:05 np0005548915 podman[263034]: 2025-12-06 10:07:05.436925269 +0000 UTC m=+0.217412604 container remove ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:07:05 np0005548915 systemd[1]: libpod-conmon-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope: Deactivated successfully.
Dec  6 05:07:05 np0005548915 nova_compute[254819]: 2025-12-06 10:07:05.481 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Dec  6 05:07:05 np0005548915 podman[263076]: 2025-12-06 10:07:05.660232611 +0000 UTC m=+0.046345391 container create 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 05:07:05 np0005548915 systemd[1]: Started libpod-conmon-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope.
Dec  6 05:07:05 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:07:05 np0005548915 podman[263076]: 2025-12-06 10:07:05.640966372 +0000 UTC m=+0.027079142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:07:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:05 np0005548915 podman[263076]: 2025-12-06 10:07:05.754441342 +0000 UTC m=+0.140554082 container init 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:07:05 np0005548915 podman[263076]: 2025-12-06 10:07:05.766908888 +0000 UTC m=+0.153021628 container start 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  6 05:07:05 np0005548915 podman[263076]: 2025-12-06 10:07:05.770225478 +0000 UTC m=+0.156338218 container attach 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:07:06 np0005548915 lvm[263167]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:07:06 np0005548915 lvm[263167]: VG ceph_vg0 finished
Dec  6 05:07:06 np0005548915 boring_wescoff[263092]: {}
Dec  6 05:07:06 np0005548915 systemd[1]: libpod-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope: Deactivated successfully.
Dec  6 05:07:06 np0005548915 systemd[1]: libpod-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope: Consumed 1.283s CPU time.
Dec  6 05:07:06 np0005548915 podman[263076]: 2025-12-06 10:07:06.565383821 +0000 UTC m=+0.951496561 container died 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 05:07:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f-merged.mount: Deactivated successfully.
Dec  6 05:07:06 np0005548915 podman[263076]: 2025-12-06 10:07:06.61240531 +0000 UTC m=+0.998518050 container remove 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:07:06 np0005548915 systemd[1]: libpod-conmon-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope: Deactivated successfully.
Dec  6 05:07:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:07:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:07:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:07:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:07:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32ac000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:07:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:06.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:07:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:07:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:07.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:07:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:07.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Dec  6 05:07:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:08 np0005548915 nova_compute[254819]: 2025-12-06 10:07:08.688 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015613.6867456, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:07:08 np0005548915 nova_compute[254819]: 2025-12-06 10:07:08.688 254824 INFO nova.compute.manager [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:07:08 np0005548915 nova_compute[254819]: 2025-12-06 10:07:08.709 254824 DEBUG nova.compute.manager [None req-4b996106-44a3-4e11-9ee0-853ed8978bb7 - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:07:08 np0005548915 nova_compute[254819]: 2025-12-06 10:07:08.752 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:08.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:07:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:07:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100708 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:07:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:07:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:07:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:08.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:07:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:09.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:07:09 np0005548915 podman[263223]: 2025-12-06 10:07:09.485853311 +0000 UTC m=+0.106384300 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 05:07:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  6 05:07:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:09 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:07:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:09 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:07:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:10 np0005548915 nova_compute[254819]: 2025-12-06 10:07:10.483 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec  6 05:07:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec  6 05:07:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:10.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 05:07:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 05:07:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  6 05:07:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:07:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:12 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:12.942 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:07:12 np0005548915 nova_compute[254819]: 2025-12-06 10:07:12.943 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:12 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:12.945 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:07:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:12.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:13.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  6 05:07:13 np0005548915 nova_compute[254819]: 2025-12-06 10:07:13.755 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:07:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:15.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:07:15 np0005548915 nova_compute[254819]: 2025-12-06 10:07:15.484 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 05:07:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100716 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:07:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:17.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:17.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 05:07:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:17 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:17.949 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.794 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.794 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.800 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.813 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:07:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:18.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.900 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.901 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.914 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:07:18 np0005548915 nova_compute[254819]: 2025-12-06 10:07:18.915 254824 INFO nova.compute.claims [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:07:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.048 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:07:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:19.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  6 05:07:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:07:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919256904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.538 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.545 254824 DEBUG nova.compute.provider_tree [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.571 254824 DEBUG nova.scheduler.client.report [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.602 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.603 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.846 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.847 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.873 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:07:19 np0005548915 nova_compute[254819]: 2025-12-06 10:07:19.901 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.016 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.017 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.018 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Creating image(s)#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.047 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.076 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.107 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.111 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.135 254824 DEBUG nova.policy [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.183 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.184 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.186 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.186 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.218 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.222 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:07:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.487 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.556 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.620 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.721 254824 DEBUG nova.objects.instance [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:07:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec  6 05:07:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec  6 05:07:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.962 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.963 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Ensure instance console log exists: /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:07:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:20.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.963 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.964 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:20 np0005548915 nova_compute[254819]: 2025-12-06 10:07:20.965 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:21.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:21 np0005548915 nova_compute[254819]: 2025-12-06 10:07:21.128 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully created port: a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:07:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.174 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully updated port: a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.188 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.188 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.188 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.318 254824 DEBUG nova.compute.manager [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.319 254824 DEBUG nova.compute.manager [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.319 254824 DEBUG oslo_concurrency.lockutils [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:07:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:22 np0005548915 nova_compute[254819]: 2025-12-06 10:07:22.409 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:07:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:22.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:23.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.322 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.351 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.351 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance network_info: |[{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.352 254824 DEBUG oslo_concurrency.lockutils [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.353 254824 DEBUG nova.network.neutron [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.359 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start _get_guest_xml network_info=[{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.365 254824 WARNING nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.370 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.370 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.377 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.377 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.378 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.378 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.378 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.383 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.803 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093221170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:07:23
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'backups', 'default.rgw.control']
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.869 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.908 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.909717) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015643909800, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1098, "num_deletes": 251, "total_data_size": 1883115, "memory_usage": 1915712, "flush_reason": "Manual Compaction"}
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  6 05:07:23 np0005548915 nova_compute[254819]: 2025-12-06 10:07:23.914 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015643929674, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1820143, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23758, "largest_seqno": 24855, "table_properties": {"data_size": 1815003, "index_size": 2600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11653, "raw_average_key_size": 19, "raw_value_size": 1804422, "raw_average_value_size": 3084, "num_data_blocks": 116, "num_entries": 585, "num_filter_entries": 585, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015553, "oldest_key_time": 1765015553, "file_creation_time": 1765015643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 19986 microseconds, and 8497 cpu microseconds.
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.929713) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1820143 bytes OK
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.929733) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.932558) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.932573) EVENT_LOG_v1 {"time_micros": 1765015643932568, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.932593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1878082, prev total WAL file size 1878082, number of live WAL files 2.
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.933497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1777KB)], [53(13MB)]
Dec  6 05:07:23 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015643933557, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15534641, "oldest_snapshot_seqno": -1}
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:07:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5430 keys, 13333901 bytes, temperature: kUnknown
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015644105422, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 13333901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13297640, "index_size": 21559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 139361, "raw_average_key_size": 25, "raw_value_size": 13199277, "raw_average_value_size": 2430, "num_data_blocks": 875, "num_entries": 5430, "num_filter_entries": 5430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.105914) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13333901 bytes
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.107645) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.2 rd, 77.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.1 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(15.9) write-amplify(7.3) OK, records in: 5947, records dropped: 517 output_compression: NoCompression
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.107701) EVENT_LOG_v1 {"time_micros": 1765015644107656, "job": 28, "event": "compaction_finished", "compaction_time_micros": 172166, "compaction_time_cpu_micros": 26388, "output_level": 6, "num_output_files": 1, "total_output_size": 13333901, "num_input_records": 5947, "num_output_records": 5430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015644108539, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015644111625, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.933394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:07:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:07:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175523667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.440 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.442 254824 DEBUG nova.virt.libvirt.vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:07:19Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.443 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.443 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.444 254824 DEBUG nova.objects.instance [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.461 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <name>instance-00000003</name>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:07:23</nova:creationTime>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <entry name="serial">2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <entry name="uuid">2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:6c:29:20"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <target dev="tapa7f5880e-0f"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log" append="off"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:07:24 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:07:24 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:07:24 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:07:24 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.462 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Preparing to wait for external event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.463 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.463 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.463 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.464 254824 DEBUG nova.virt.libvirt.vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:07:19Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.465 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:07:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.465 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.466 254824 DEBUG os_vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.466 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.466 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.467 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.473 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.473 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7f5880e-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.474 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7f5880e-0f, col_values=(('external_ids', {'iface-id': 'a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:29:20', 'vm-uuid': '2ef62e22-52fc-44f3-9964-8dc9b3c20686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.476 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:24 np0005548915 NetworkManager[48882]: <info>  [1765015644.4776] manager: (tapa7f5880e-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.478 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:07:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.487 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.488 254824 INFO os_vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f')#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.557 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.557 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.558 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6c:29:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.558 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Using config drive#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.584 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.657 254824 DEBUG nova.network.neutron [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.657 254824 DEBUG nova.network.neutron [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:07:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:24 np0005548915 nova_compute[254819]: 2025-12-06 10:07:24.930 254824 DEBUG oslo_concurrency.lockutils [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:07:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:24.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:25.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:25 np0005548915 nova_compute[254819]: 2025-12-06 10:07:25.489 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:07:25 np0005548915 nova_compute[254819]: 2025-12-06 10:07:25.948 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Creating config drive at /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config#033[00m
Dec  6 05:07:25 np0005548915 nova_compute[254819]: 2025-12-06 10:07:25.954 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9fzg95k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.095 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9fzg95k" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.133 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.138 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.321 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.323 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deleting local config drive /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config because it was imported into RBD.#033[00m
Dec  6 05:07:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:26 np0005548915 kernel: tapa7f5880e-0f: entered promiscuous mode
Dec  6 05:07:26 np0005548915 NetworkManager[48882]: <info>  [1765015646.3861] manager: (tapa7f5880e-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  6 05:07:26 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:26Z|00036|binding|INFO|Claiming lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for this chassis.
Dec  6 05:07:26 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:26Z|00037|binding|INFO|a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7: Claiming fa:16:3e:6c:29:20 10.100.0.12
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.388 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.407 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:29:20 10.100.0.12'], port_security=['fa:16:3e:6c:29:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f18b54b7-70a3-4b32-8644-f822c2e837c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75f33c5-f6d1-4d65-a2b0-b56ec14fd7b3, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.408 162267 INFO neutron.agent.ovn.metadata.agent [-] Port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 in datapath 4d9eb8be-73ac-4cfc-8821-fb41b5868957 bound to our chassis#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.410 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d9eb8be-73ac-4cfc-8821-fb41b5868957#033[00m
Dec  6 05:07:26 np0005548915 systemd-machined[216202]: New machine qemu-2-instance-00000003.
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.426 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[01f1754f-f155-4c22-ae27-839bef3fe411]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.428 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d9eb8be-71 in ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.430 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d9eb8be-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.430 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[aeafe6e1-66b3-4ddd-9238-8d58bd2e1898]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.431 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8a03b3-6a30-4048-8737-a1be75475028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.445 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[39671493-c429-4cbe-b558-949edd7f98e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Dec  6 05:07:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:26 np0005548915 systemd-udevd[263611]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.481 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[addc850e-3ec3-45c2-8afe-79e23c250958]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 NetworkManager[48882]: <info>  [1765015646.4877] device (tapa7f5880e-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:07:26 np0005548915 NetworkManager[48882]: <info>  [1765015646.4884] device (tapa7f5880e-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.496 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:26 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:26Z|00038|binding|INFO|Setting lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 ovn-installed in OVS
Dec  6 05:07:26 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:26Z|00039|binding|INFO|Setting lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 up in Southbound
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.502 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.520 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ef51f6-074d-4e1c-9fa4-c158f91272b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.525 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d5acd5b8-1362-4597-938b-2e3da6418870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 NetworkManager[48882]: <info>  [1765015646.5264] manager: (tap4d9eb8be-70): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.558 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[ee37e136-21dc-4c05-b6bd-d8cd2134e12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.561 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[0c649a44-e408-4344-a2e1-475429562418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 NetworkManager[48882]: <info>  [1765015646.5833] device (tap4d9eb8be-70): carrier: link connected
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.588 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[16d18a8f-5307-46af-8e57-6bd94d52f9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.605 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2c76331a-3945-40ab-a4e5-195349676238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d9eb8be-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:61:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401706, 'reachable_time': 22692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263641, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.621 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f51035-e4e9-46c1-8880-bdc699e66254]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:6106'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 401706, 'tstamp': 401706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263642, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.637 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c58c1c6a-e80f-47cc-8ac9-36d6e09f5e3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d9eb8be-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:61:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401706, 'reachable_time': 22692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263643, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.679 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[84e892f4-e501-4da9-bd09-ceec99119b60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.737 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[00f0868a-0204-4d30-9f5d-a3d5ec1aa069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.738 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d9eb8be-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.739 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.739 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d9eb8be-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.741 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:26 np0005548915 NetworkManager[48882]: <info>  [1765015646.7418] manager: (tap4d9eb8be-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec  6 05:07:26 np0005548915 kernel: tap4d9eb8be-70: entered promiscuous mode
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.743 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d9eb8be-70, col_values=(('external_ids', {'iface-id': '614c688d-e8cc-4f61-86da-0aa3c3ee7fd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:26 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:26Z|00040|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.765 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.767 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d9eb8be-73ac-4cfc-8821-fb41b5868957.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d9eb8be-73ac-4cfc-8821-fb41b5868957.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.768 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc7650f-f71c-4be1-b5ea-492ff678423c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.769 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-4d9eb8be-73ac-4cfc-8821-fb41b5868957
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/4d9eb8be-73ac-4cfc-8821-fb41b5868957.pid.haproxy
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID 4d9eb8be-73ac-4cfc-8821-fb41b5868957
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:07:26 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.770 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'env', 'PROCESS_TAG=haproxy-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d9eb8be-73ac-4cfc-8821-fb41b5868957.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:07:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:07:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.971 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015646.9711125, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:07:26 np0005548915 nova_compute[254819]: 2025-12-06 10:07:26.972 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Started (Lifecycle Event)#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.003 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.007 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015646.971947, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.007 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Paused (Lifecycle Event)#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.025 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.029 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.049 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:07:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:27.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.150 254824 DEBUG nova.compute.manager [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.150 254824 DEBUG oslo_concurrency.lockutils [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.151 254824 DEBUG oslo_concurrency.lockutils [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.151 254824 DEBUG oslo_concurrency.lockutils [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.151 254824 DEBUG nova.compute.manager [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Processing event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.152 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.155 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015647.1555302, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.156 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Resumed (Lifecycle Event)#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.157 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 05:07:27 np0005548915 podman[263717]: 2025-12-06 10:07:27.159579705 +0000 UTC m=+0.050960696 container create d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.160 254824 INFO nova.virt.libvirt.driver [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance spawned successfully.#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.161 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.176 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.183 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.185 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.185 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.185 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.186 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.186 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.187 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:07:27 np0005548915 systemd[1]: Started libpod-conmon-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb.scope.
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.221 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:07:27 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:07:27 np0005548915 podman[263717]: 2025-12-06 10:07:27.132500394 +0000 UTC m=+0.023881405 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:07:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96304682dba270089a316a6ea2c840eb8d50d3698a98881b517984b3b6c64718/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:07:27 np0005548915 podman[263717]: 2025-12-06 10:07:27.24358931 +0000 UTC m=+0.134970401 container init d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 05:07:27 np0005548915 podman[263717]: 2025-12-06 10:07:27.257172216 +0000 UTC m=+0.148553207 container start d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 05:07:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:27.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:07:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:27.268Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:07:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:27.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.272 254824 INFO nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 7.26 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.273 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:07:27 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : New worker (263739) forked
Dec  6 05:07:27 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : Loading success.
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.362 254824 INFO nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 8.50 seconds to build instance.#033[00m
Dec  6 05:07:27 np0005548915 nova_compute[254819]: 2025-12-06 10:07:27.380 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:07:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:28.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:29.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:29 np0005548915 nova_compute[254819]: 2025-12-06 10:07:29.248 254824 DEBUG nova.compute.manager [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:07:29 np0005548915 nova_compute[254819]: 2025-12-06 10:07:29.249 254824 DEBUG oslo_concurrency.lockutils [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:29 np0005548915 nova_compute[254819]: 2025-12-06 10:07:29.249 254824 DEBUG oslo_concurrency.lockutils [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:29 np0005548915 nova_compute[254819]: 2025-12-06 10:07:29.249 254824 DEBUG oslo_concurrency.lockutils [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:29 np0005548915 nova_compute[254819]: 2025-12-06 10:07:29.250 254824 DEBUG nova.compute.manager [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:07:29 np0005548915 nova_compute[254819]: 2025-12-06 10:07:29.250 254824 WARNING nova.compute.manager [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:07:29 np0005548915 nova_compute[254819]: 2025-12-06 10:07:29.478 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec  6 05:07:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:30 np0005548915 podman[263751]: 2025-12-06 10:07:30.43958511 +0000 UTC m=+0.060908793 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:07:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:30 np0005548915 nova_compute[254819]: 2025-12-06 10:07:30.492 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec  6 05:07:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec  6 05:07:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:30.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:31.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:31 np0005548915 NetworkManager[48882]: <info>  [1765015651.1053] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec  6 05:07:31 np0005548915 NetworkManager[48882]: <info>  [1765015651.1060] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.104 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:31 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:31Z|00041|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec  6 05:07:31 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:31Z|00042|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.155 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.608 254824 DEBUG nova.compute.manager [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.609 254824 DEBUG nova.compute.manager [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.609 254824 DEBUG oslo_concurrency.lockutils [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.609 254824 DEBUG oslo_concurrency.lockutils [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:07:31 np0005548915 nova_compute[254819]: 2025-12-06 10:07:31.610 254824 DEBUG nova.network.neutron [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:07:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0001e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:33 np0005548915 nova_compute[254819]: 2025-12-06 10:07:33.011 254824 DEBUG nova.network.neutron [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:07:33 np0005548915 nova_compute[254819]: 2025-12-06 10:07:33.012 254824 DEBUG nova.network.neutron [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:07:33 np0005548915 nova_compute[254819]: 2025-12-06 10:07:33.035 254824 DEBUG oslo_concurrency.lockutils [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:07:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:33.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  6 05:07:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:34 np0005548915 nova_compute[254819]: 2025-12-06 10:07:34.481 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:35 np0005548915 podman[263803]: 2025-12-06 10:07:35.461437809 +0000 UTC m=+0.089869335 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 05:07:35 np0005548915 nova_compute[254819]: 2025-12-06 10:07:35.494 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  6 05:07:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:37.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  6 05:07:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:38 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:38 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:38.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:38 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:07:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:07:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.485 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:07:39 np0005548915 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:07:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:40Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:29:20 10.100.0.12
Dec  6 05:07:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:29:20 10.100.0.12
Dec  6 05:07:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:07:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698280352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.282 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:07:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:40 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.368 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.369 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:07:40 np0005548915 podman[263858]: 2025-12-06 10:07:40.404996047 +0000 UTC m=+0.070083781 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 05:07:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:40 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.497 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.555 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.557 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4391MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.558 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.558 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.630 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.630 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.630 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:07:40 np0005548915 nova_compute[254819]: 2025-12-06 10:07:40.668 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:07:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:07:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:07:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:40 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:07:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639730022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:07:41 np0005548915 nova_compute[254819]: 2025-12-06 10:07:41.126 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:07:41 np0005548915 nova_compute[254819]: 2025-12-06 10:07:41.132 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:07:41 np0005548915 nova_compute[254819]: 2025-12-06 10:07:41.147 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:07:41 np0005548915 nova_compute[254819]: 2025-12-06 10:07:41.166 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:07:41 np0005548915 nova_compute[254819]: 2025-12-06 10:07:41.167 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.168 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:07:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:42 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:42 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.751 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.751 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:07:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:42 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.984 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.985 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.985 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:07:42 np0005548915 nova_compute[254819]: 2025-12-06 10:07:42.985 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:07:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:42.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Dec  6 05:07:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:07:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5517 writes, 24K keys, 5516 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 5517 writes, 5516 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1523 writes, 6794 keys, 1523 commit groups, 1.0 writes per commit group, ingest: 11.19 MB, 0.02 MB/s#012Interval WAL: 1523 writes, 1523 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     83.0      0.46              0.11        14    0.033       0      0       0.0       0.0#012  L6      1/0   12.72 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.4     96.5     83.8      2.02              0.46        13    0.156     67K   6762       0.0       0.0#012 Sum      1/0   12.72 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   5.4     78.6     83.7      2.49              0.57        27    0.092     67K   6762       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     90.2     91.1      0.99              0.24        12    0.083     34K   3113       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     96.5     83.8      2.02              0.46        13    0.156     67K   6762       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     84.2      0.46              0.11        13    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.12 MB/s write, 0.19 GB read, 0.11 MB/s read, 2.5 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 13.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000118 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(740,13.11 MB,4.3119%) FilterBlock(28,201.30 KB,0.0646641%) IndexBlock(28,356.02 KB,0.114366%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 05:07:44 np0005548915 nova_compute[254819]: 2025-12-06 10:07:44.151 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:07:44 np0005548915 nova_compute[254819]: 2025-12-06 10:07:44.178 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:07:44 np0005548915 nova_compute[254819]: 2025-12-06 10:07:44.179 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:07:44 np0005548915 nova_compute[254819]: 2025-12-06 10:07:44.180 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:44 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:44 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:44 np0005548915 nova_compute[254819]: 2025-12-06 10:07:44.488 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:44 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:44.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:45.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:45 np0005548915 nova_compute[254819]: 2025-12-06 10:07:45.498 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:07:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:07:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3571976026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:07:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:07:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3571976026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:07:46 np0005548915 nova_compute[254819]: 2025-12-06 10:07:46.026 254824 INFO nova.compute.manager [None req-a4dff3fb-086c-491f-ac98-f0609a3e12cd 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Get console output#033[00m
Dec  6 05:07:46 np0005548915 nova_compute[254819]: 2025-12-06 10:07:46.034 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:07:46 np0005548915 nova_compute[254819]: 2025-12-06 10:07:46.171 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:07:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:46 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:46 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:46 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:07:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:47.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:07:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:47.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:07:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:48 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:48 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:48.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:48 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:49 np0005548915 nova_compute[254819]: 2025-12-06 10:07:49.492 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:07:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:50 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:50 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:50 np0005548915 nova_compute[254819]: 2025-12-06 10:07:50.503 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:07:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:07:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:50 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:51.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:07:52 np0005548915 nova_compute[254819]: 2025-12-06 10:07:52.239 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:52 np0005548915 nova_compute[254819]: 2025-12-06 10:07:52.240 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:52 np0005548915 nova_compute[254819]: 2025-12-06 10:07:52.241 254824 DEBUG nova.objects.instance [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:07:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:52 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:52 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:52 np0005548915 nova_compute[254819]: 2025-12-06 10:07:52.756 254824 DEBUG nova.objects.instance [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:07:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:52 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:52 np0005548915 nova_compute[254819]: 2025-12-06 10:07:52.963 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:07:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:52.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:53.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  6 05:07:53 np0005548915 nova_compute[254819]: 2025-12-06 10:07:53.823 254824 DEBUG nova.policy [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:07:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:07:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:07:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:07:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:07:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:07:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:07:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:07:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:07:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:54.240 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:54 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:54 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:54 np0005548915 nova_compute[254819]: 2025-12-06 10:07:54.521 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:54 np0005548915 nova_compute[254819]: 2025-12-06 10:07:54.912 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully created port: bf396b58-3b48-44ae-92bd-e71275c9883c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:07:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:54 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:55.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.506 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 0 op/s
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.854 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully updated port: bf396b58-3b48-44ae-92bd-e71275c9883c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.879 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.879 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.879 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.984 254824 DEBUG nova.compute.manager [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.985 254824 DEBUG nova.compute.manager [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-bf396b58-3b48-44ae-92bd-e71275c9883c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:07:55 np0005548915 nova_compute[254819]: 2025-12-06 10:07:55.986 254824 DEBUG oslo_concurrency.lockutils [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:07:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:56 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:56 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:56 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:07:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:07:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:57.272Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:07:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:57.272Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:07:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:57.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 0 op/s
Dec  6 05:07:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:07:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:07:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:07:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:07:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:07:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.177 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.202 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.204 254824 DEBUG oslo_concurrency.lockutils [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.204 254824 DEBUG nova.network.neutron [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port bf396b58-3b48-44ae-92bd-e71275c9883c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.208 254824 DEBUG nova.virt.libvirt.vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.208 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.209 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.210 254824 DEBUG os_vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.210 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.211 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.211 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.215 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.215 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf396b58-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.216 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf396b58-3b, col_values=(('external_ids', {'iface-id': 'bf396b58-3b48-44ae-92bd-e71275c9883c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:56:e3', 'vm-uuid': '2ef62e22-52fc-44f3-9964-8dc9b3c20686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.218 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 NetworkManager[48882]: <info>  [1765015679.2195] manager: (tapbf396b58-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.223 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.228 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.229 254824 INFO os_vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b')#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.230 254824 DEBUG nova.virt.libvirt.vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.231 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.231 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.234 254824 DEBUG nova.virt.libvirt.guest [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] attach device xml: <interface type="ethernet">
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <mac address="fa:16:3e:9c:56:e3"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <model type="virtio"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <mtu size="1442"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <target dev="tapbf396b58-3b"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]: </interface>
Dec  6 05:07:59 np0005548915 nova_compute[254819]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 05:07:59 np0005548915 kernel: tapbf396b58-3b: entered promiscuous mode
Dec  6 05:07:59 np0005548915 NetworkManager[48882]: <info>  [1765015679.2522] manager: (tapbf396b58-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Dec  6 05:07:59 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:59Z|00043|binding|INFO|Claiming lport bf396b58-3b48-44ae-92bd-e71275c9883c for this chassis.
Dec  6 05:07:59 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:59Z|00044|binding|INFO|bf396b58-3b48-44ae-92bd-e71275c9883c: Claiming fa:16:3e:9c:56:e3 10.100.0.20
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.262 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:e3 10.100.0.20'], port_security=['fa:16:3e:9c:56:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b700d432-ed1c-4e29-8f64-6e35196305aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e1a9f4d-accf-4c87-b819-872eff5f1a0b, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=bf396b58-3b48-44ae-92bd-e71275c9883c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.263 162267 INFO neutron.agent.ovn.metadata.agent [-] Port bf396b58-3b48-44ae-92bd-e71275c9883c in datapath b700d432-ed1c-4e29-8f64-6e35196305aa bound to our chassis#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.264 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b700d432-ed1c-4e29-8f64-6e35196305aa#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.284 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4fda7e65-103a-407a-a238-9e12dfd1fd4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.285 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb700d432-e1 in ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.286 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb700d432-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.286 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7a10e162-3754-4dd6-8aa7-282cbf167aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.287 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e4628cd5-a1f7-46ff-8180-a420270ea038]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 systemd-udevd[263949]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.292 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:59Z|00045|binding|INFO|Setting lport bf396b58-3b48-44ae-92bd-e71275c9883c ovn-installed in OVS
Dec  6 05:07:59 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:59Z|00046|binding|INFO|Setting lport bf396b58-3b48-44ae-92bd-e71275c9883c up in Southbound
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.303 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.308 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[a05d6be8-9583-47bb-8103-eadfbdcb0b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 NetworkManager[48882]: <info>  [1765015679.3139] device (tapbf396b58-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:07:59 np0005548915 NetworkManager[48882]: <info>  [1765015679.3153] device (tapbf396b58-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.334 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ba3fdd-1e2c-4afb-b2fb-cfc1ce6cd24a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.339 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.339 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.339 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6c:29:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.340 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:9c:56:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.370 254824 DEBUG nova.virt.libvirt.guest [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:07:59</nova:creationTime>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:07:59 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    <nova:port uuid="bf396b58-3b48-44ae-92bd-e71275c9883c">
Dec  6 05:07:59 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:07:59 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:07:59 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:07:59 np0005548915 nova_compute[254819]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.375 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[ea333388-d071-4553-83b3-420f0a3dddfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 NetworkManager[48882]: <info>  [1765015679.3810] manager: (tapb700d432-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.380 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa4e9bc-f11d-49ab-a278-cb32e7fb1524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.398 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.419 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[d73289dc-faab-4310-ac51-0ca672a095af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.422 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[0d808f3e-3f46-4abb-830d-e8e5258673c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 NetworkManager[48882]: <info>  [1765015679.4459] device (tapb700d432-e0): carrier: link connected
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.451 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[72bb8b6f-eeca-4dcc-94eb-e4056c09a1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.471 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[86716937-dd6d-4ea1-aa6d-8617cb4a63f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb700d432-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:b5:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404993, 'reachable_time': 29733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263977, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.491 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6539b9-0e17-4967-adff-9f7e07c0434b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:b522'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 404993, 'tstamp': 404993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263978, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.509 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fe18e223-f0c2-4758-9c03-6962a8bb6f96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb700d432-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:b5:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404993, 'reachable_time': 29733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263979, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 1 op/s
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.542 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1550d4ce-d0c7-4168-b23d-57e9f8bb0caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.572 254824 DEBUG nova.compute.manager [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.573 254824 DEBUG oslo_concurrency.lockutils [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.573 254824 DEBUG oslo_concurrency.lockutils [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.573 254824 DEBUG oslo_concurrency.lockutils [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.574 254824 DEBUG nova.compute.manager [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.574 254824 WARNING nova.compute.manager [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.609 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[81595e81-9c6b-45b5-b8b9-cc9851017791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.611 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb700d432-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.611 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.611 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb700d432-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:59 np0005548915 NetworkManager[48882]: <info>  [1765015679.6141] manager: (tapb700d432-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.613 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 kernel: tapb700d432-e0: entered promiscuous mode
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.616 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.617 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb700d432-e0, col_values=(('external_ids', {'iface-id': '3214dd51-8339-49df-a992-3256b03ff074'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:07:59 np0005548915 ovn_controller[152417]: 2025-12-06T10:07:59Z|00047|binding|INFO|Releasing lport 3214dd51-8339-49df-a992-3256b03ff074 from this chassis (sb_readonly=0)
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.618 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 nova_compute[254819]: 2025-12-06 10:07:59.632 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.632 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b700d432-ed1c-4e29-8f64-6e35196305aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b700d432-ed1c-4e29-8f64-6e35196305aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.633 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2a8fc7-22b4-406c-a586-a6ac6bc90586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.634 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-b700d432-ed1c-4e29-8f64-6e35196305aa
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/b700d432-ed1c-4e29-8f64-6e35196305aa.pid.haproxy
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID b700d432-ed1c-4e29-8f64-6e35196305aa
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:07:59 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.634 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'env', 'PROCESS_TAG=haproxy-b700d432-ed1c-4e29-8f64-6e35196305aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b700d432-ed1c-4e29-8f64-6e35196305aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:08:00 np0005548915 podman[264011]: 2025-12-06 10:08:00.016531059 +0000 UTC m=+0.049638359 container create 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:08:00 np0005548915 systemd[1]: Started libpod-conmon-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33.scope.
Dec  6 05:08:00 np0005548915 podman[264011]: 2025-12-06 10:07:59.988614576 +0000 UTC m=+0.021721876 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:08:00 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab7a22314b67a8e06bacab4c25c79547be2603d131e46433f9dadedd7c6018f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:00 np0005548915 podman[264011]: 2025-12-06 10:08:00.124719087 +0000 UTC m=+0.157826407 container init 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 05:08:00 np0005548915 podman[264011]: 2025-12-06 10:08:00.132803795 +0000 UTC m=+0.165911085 container start 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:08:00 np0005548915 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : New worker (264031) forked
Dec  6 05:08:00 np0005548915 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : Loading success.
Dec  6 05:08:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:00 np0005548915 nova_compute[254819]: 2025-12-06 10:08:00.508 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:00 np0005548915 nova_compute[254819]: 2025-12-06 10:08:00.615 254824 DEBUG nova.network.neutron [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port bf396b58-3b48-44ae-92bd-e71275c9883c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:08:00 np0005548915 nova_compute[254819]: 2025-12-06 10:08:00.616 254824 DEBUG nova.network.neutron [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:00 np0005548915 nova_compute[254819]: 2025-12-06 10:08:00.631 254824 DEBUG oslo_concurrency.lockutils [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:08:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:08:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:08:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:01.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:01.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.388 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-bf396b58-3b48-44ae-92bd-e71275c9883c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.389 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-bf396b58-3b48-44ae-92bd-e71275c9883c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.405 254824 DEBUG nova.objects.instance [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.424 254824 DEBUG nova.virt.libvirt.vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.425 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.425 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.429 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.431 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.434 254824 DEBUG nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Attempting to detach device tapbf396b58-3b from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.434 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <mac address="fa:16:3e:9c:56:e3"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <model type="virtio"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <mtu size="1442"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <target dev="tapbf396b58-3b"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: </interface>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.440 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.443 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface>not found in domain: <domain type='kvm' id='2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <name>instance-00000003</name>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:07:59</nova:creationTime>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:port uuid="bf396b58-3b48-44ae-92bd-e71275c9883c">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:6c:29:20'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target dev='tapa7f5880e-0f'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:9c:56:e3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target dev='tapbf396b58-3b'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='net1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.443 254824 INFO nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tapbf396b58-3b from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the persistent domain config.
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.444 254824 DEBUG nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] (1/8): Attempting to detach device tapbf396b58-3b with device alias net1 from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.444 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <mac address="fa:16:3e:9c:56:e3"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <model type="virtio"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <mtu size="1442"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <target dev="tapbf396b58-3b"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: </interface>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec  6 05:08:01 np0005548915 podman[264041]: 2025-12-06 10:08:01.474310094 +0000 UTC m=+0.089155916 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 05:08:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.0 KiB/s wr, 0 op/s
Dec  6 05:08:01 np0005548915 kernel: tapbf396b58-3b (unregistering): left promiscuous mode
Dec  6 05:08:01 np0005548915 NetworkManager[48882]: <info>  [1765015681.5483] device (tapbf396b58-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:08:01 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:01Z|00048|binding|INFO|Releasing lport bf396b58-3b48-44ae-92bd-e71275c9883c from this chassis (sb_readonly=0)
Dec  6 05:08:01 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:01Z|00049|binding|INFO|Setting lport bf396b58-3b48-44ae-92bd-e71275c9883c down in Southbound
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.551 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:08:01 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:01Z|00050|binding|INFO|Removing iface tapbf396b58-3b ovn-installed in OVS
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.553 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.561 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:e3 10.100.0.20'], port_security=['fa:16:3e:9c:56:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b700d432-ed1c-4e29-8f64-6e35196305aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e1a9f4d-accf-4c87-b819-872eff5f1a0b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=bf396b58-3b48-44ae-92bd-e71275c9883c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.561 254824 DEBUG nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Received event <DeviceRemovedEvent: 1765015681.5615783, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.564 254824 DEBUG nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Start waiting for the detach event from libvirt for device tapbf396b58-3b with device alias net1 for instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.564 162267 INFO neutron.agent.ovn.metadata.agent [-] Port bf396b58-3b48-44ae-92bd-e71275c9883c in datapath b700d432-ed1c-4e29-8f64-6e35196305aa unbound from our chassis
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.564 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.566 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b700d432-ed1c-4e29-8f64-6e35196305aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.567 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> not found in domain: <domain type='kvm' id='2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <name>instance-00000003</name>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:07:59</nova:creationTime>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:port uuid="bf396b58-3b48-44ae-92bd-e71275c9883c">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.567 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c0328f30-4bb6-47e6-950b-ffa2ce7dfd2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.568 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa namespace which is not needed anymore
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:6c:29:20'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target dev='tapa7f5880e-0f'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.567 254824 INFO nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tapbf396b58-3b from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the live domain config.
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.568 254824 DEBUG nova.virt.libvirt.vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.568 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.568 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.569 254824 DEBUG os_vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.570 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.570 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf396b58-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.571 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.573 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.575 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.579 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.581 254824 INFO os_vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b')
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.582 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:08:01</nova:creationTime>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:08:01 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:01 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:08:01 np0005548915 nova_compute[254819]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 05:08:01 np0005548915 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : haproxy version is 2.8.14-c23fe91
Dec  6 05:08:01 np0005548915 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : path to executable is /usr/sbin/haproxy
Dec  6 05:08:01 np0005548915 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [WARNING]  (264029) : Exiting Master process...
Dec  6 05:08:01 np0005548915 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [ALERT]    (264029) : Current worker (264031) exited with code 143 (Terminated)
Dec  6 05:08:01 np0005548915 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [WARNING]  (264029) : All workers exited. Exiting... (0)
Dec  6 05:08:01 np0005548915 systemd[1]: libpod-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33.scope: Deactivated successfully.
Dec  6 05:08:01 np0005548915 podman[264084]: 2025-12-06 10:08:01.698064068 +0000 UTC m=+0.042897619 container died 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.700 254824 DEBUG nova.compute.manager [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.702 254824 DEBUG oslo_concurrency.lockutils [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.702 254824 DEBUG oslo_concurrency.lockutils [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.703 254824 DEBUG oslo_concurrency.lockutils [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.703 254824 DEBUG nova.compute.manager [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.703 254824 WARNING nova.compute.manager [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.#033[00m
Dec  6 05:08:01 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33-userdata-shm.mount: Deactivated successfully.
Dec  6 05:08:01 np0005548915 systemd[1]: var-lib-containers-storage-overlay-4ab7a22314b67a8e06bacab4c25c79547be2603d131e46433f9dadedd7c6018f-merged.mount: Deactivated successfully.
Dec  6 05:08:01 np0005548915 podman[264084]: 2025-12-06 10:08:01.754543041 +0000 UTC m=+0.099376592 container cleanup 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 05:08:01 np0005548915 systemd[1]: libpod-conmon-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33.scope: Deactivated successfully.
Dec  6 05:08:01 np0005548915 podman[264113]: 2025-12-06 10:08:01.817911509 +0000 UTC m=+0.042906428 container remove 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.823 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[17991518-929d-4883-8980-b3aad9353719]: (4, ('Sat Dec  6 10:08:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa (6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33)\n6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33\nSat Dec  6 10:08:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa (6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33)\n6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.824 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9578d6-d0d9-4cca-85eb-1512e994a9c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.825 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb700d432-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.827 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:01 np0005548915 kernel: tapb700d432-e0: left promiscuous mode
Dec  6 05:08:01 np0005548915 nova_compute[254819]: 2025-12-06 10:08:01.841 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.843 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[54ad943f-bb2e-46db-8adb-60c9e02a9f7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.862 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0d6f54-b53d-4560-96c5-4af2c9c3b7d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.863 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb7f87d-a120-4761-b10d-d49d003afc6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.877 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9841695b-f082-45a5-adde-6a86475a9463]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404985, 'reachable_time': 15469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264129, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.880 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:08:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.880 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa3b9fb-dcae-4f23-a91e-1f98950be44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:01 np0005548915 systemd[1]: run-netns-ovnmeta\x2db700d432\x2ded1c\x2d4e29\x2d8f64\x2d6e35196305aa.mount: Deactivated successfully.
Dec  6 05:08:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:02 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:02 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:02 np0005548915 nova_compute[254819]: 2025-12-06 10:08:02.629 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:08:02 np0005548915 nova_compute[254819]: 2025-12-06 10:08:02.630 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:08:02 np0005548915 nova_compute[254819]: 2025-12-06 10:08:02.630 254824 DEBUG nova.network.neutron [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:08:02 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:02 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:03.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Dec  6 05:08:03 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:03Z|00051|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.789 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.814 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-unplugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.814 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-unplugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 WARNING nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-unplugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 WARNING nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-deleted-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 INFO nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Neutron deleted interface bf396b58-3b48-44ae-92bd-e71275c9883c; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.817 254824 DEBUG nova.network.neutron [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.848 254824 DEBUG nova.objects.instance [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'system_metadata' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.914 254824 DEBUG nova.objects.instance [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'flavor' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.952 254824 DEBUG nova.virt.libvirt.vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.953 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.953 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.956 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.959 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> not found in domain: <domain type='kvm' id='2'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <name>instance-00000003</name>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:08:01</nova:creationTime>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:08:03 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:6c:29:20'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target dev='tapa7f5880e-0f'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:03 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:08:03 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.961 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:08:03 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.966 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface>not found in domain: <domain type='kvm' id='2'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <name>instance-00000003</name>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:08:01</nova:creationTime>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:08:03 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:08:03 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:6c:29:20'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target dev='tapa7f5880e-0f'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:08:04 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:08:04 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.967 254824 WARNING nova.virt.libvirt.driver [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Detaching interface fa:16:3e:9c:56:e3 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapbf396b58-3b' not found.#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.969 254824 DEBUG nova.virt.libvirt.vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.970 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.971 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.971 254824 DEBUG os_vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.974 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.975 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf396b58-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.975 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.978 254824 INFO os_vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b')#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:03.979 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:08:03</nova:creationTime>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec  6 05:08:04 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:08:04 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:08:04 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:08:04 np0005548915 nova_compute[254819]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 05:08:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:04 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:04 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.508 254824 INFO nova.network.neutron [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Port bf396b58-3b48-44ae-92bd-e71275c9883c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.509 254824 DEBUG nova.network.neutron [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.532 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.574 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-bf396b58-3b48-44ae-92bd-e71275c9883c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.846 254824 DEBUG nova.compute.manager [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.846 254824 DEBUG nova.compute.manager [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.847 254824 DEBUG oslo_concurrency.lockutils [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.847 254824 DEBUG oslo_concurrency.lockutils [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.847 254824 DEBUG nova.network.neutron [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.901 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.902 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.902 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.903 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.903 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.904 254824 INFO nova.compute.manager [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Terminating instance#033[00m
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.905 254824 DEBUG nova.compute.manager [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:08:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:04 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:04 np0005548915 kernel: tapa7f5880e-0f (unregistering): left promiscuous mode
Dec  6 05:08:04 np0005548915 NetworkManager[48882]: <info>  [1765015684.9789] device (tapa7f5880e-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.984 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:04 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:04Z|00052|binding|INFO|Releasing lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 from this chassis (sb_readonly=0)
Dec  6 05:08:04 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:04Z|00053|binding|INFO|Setting lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 down in Southbound
Dec  6 05:08:04 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:04Z|00054|binding|INFO|Removing iface tapa7f5880e-0f ovn-installed in OVS
Dec  6 05:08:04 np0005548915 nova_compute[254819]: 2025-12-06 10:08:04.987 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.992 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:29:20 10.100.0.12'], port_security=['fa:16:3e:6c:29:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f18b54b7-70a3-4b32-8644-f822c2e837c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75f33c5-f6d1-4d65-a2b0-b56ec14fd7b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:08:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.993 162267 INFO neutron.agent.ovn.metadata.agent [-] Port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 in datapath 4d9eb8be-73ac-4cfc-8821-fb41b5868957 unbound from our chassis#033[00m
Dec  6 05:08:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.995 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d9eb8be-73ac-4cfc-8821-fb41b5868957, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:08:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.996 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b42076e-3e88-4e2f-ac0d-691257f43848]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.996 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 namespace which is not needed anymore#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.008 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:05 np0005548915 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  6 05:08:05 np0005548915 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 15.042s CPU time.
Dec  6 05:08:05 np0005548915 systemd-machined[216202]: Machine qemu-2-instance-00000003 terminated.
Dec  6 05:08:05 np0005548915 kernel: tapa7f5880e-0f: entered promiscuous mode
Dec  6 05:08:05 np0005548915 kernel: tapa7f5880e-0f (unregistering): left promiscuous mode
Dec  6 05:08:05 np0005548915 NetworkManager[48882]: <info>  [1765015685.1315] manager: (tapa7f5880e-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Dec  6 05:08:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:05.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.138 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:05 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : haproxy version is 2.8.14-c23fe91
Dec  6 05:08:05 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : path to executable is /usr/sbin/haproxy
Dec  6 05:08:05 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [WARNING]  (263737) : Exiting Master process...
Dec  6 05:08:05 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [WARNING]  (263737) : Exiting Master process...
Dec  6 05:08:05 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [ALERT]    (263737) : Current worker (263739) exited with code 143 (Terminated)
Dec  6 05:08:05 np0005548915 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [WARNING]  (263737) : All workers exited. Exiting... (0)
Dec  6 05:08:05 np0005548915 systemd[1]: libpod-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb.scope: Deactivated successfully.
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.150 254824 INFO nova.virt.libvirt.driver [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance destroyed successfully.#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.151 254824 DEBUG nova.objects.instance [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:08:05 np0005548915 podman[264158]: 2025-12-06 10:08:05.152769406 +0000 UTC m=+0.050995546 container died d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.165 254824 DEBUG nova.virt.libvirt.vif [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.166 254824 DEBUG nova.network.os_vif_util [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.167 254824 DEBUG nova.network.os_vif_util [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.168 254824 DEBUG os_vif [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.169 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.169 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7f5880e-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.171 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.174 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.182 254824 INFO os_vif [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f')#033[00m
Dec  6 05:08:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb-userdata-shm.mount: Deactivated successfully.
Dec  6 05:08:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay-96304682dba270089a316a6ea2c840eb8d50d3698a98881b517984b3b6c64718-merged.mount: Deactivated successfully.
Dec  6 05:08:05 np0005548915 podman[264158]: 2025-12-06 10:08:05.196518805 +0000 UTC m=+0.094744915 container cleanup d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 05:08:05 np0005548915 systemd[1]: libpod-conmon-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb.scope: Deactivated successfully.
Dec  6 05:08:05 np0005548915 podman[264211]: 2025-12-06 10:08:05.270586393 +0000 UTC m=+0.045748554 container remove d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.277 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[22a8a150-5c73-4ca6-9583-a4c9ec1370e7]: (4, ('Sat Dec  6 10:08:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 (d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb)\nd17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb\nSat Dec  6 10:08:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 (d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb)\nd17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.280 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5c068b25-b985-47fb-8fdb-51e96840c0c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.281 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d9eb8be-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.283 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:05 np0005548915 kernel: tap4d9eb8be-70: left promiscuous mode
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.304 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.304 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[af636a8d-5811-4bc1-9d3b-03994e3d5ab0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.318 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[05cd4da6-296f-4ed6-a12b-a8c9529d808a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.319 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5651cf36-9559-4b54-9d6c-f643c54caa32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.341 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9b841453-aa53-489d-a8b8-10c4e63c5493]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401700, 'reachable_time': 24644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264228, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:05 np0005548915 systemd[1]: run-netns-ovnmeta\x2d4d9eb8be\x2d73ac\x2d4cfc\x2d8821\x2dfb41b5868957.mount: Deactivated successfully.
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.344 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:08:05 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.345 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[82e3f770-e51f-4ac8-8979-b496287d009f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.510 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1023 B/s wr, 1 op/s
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.659 254824 INFO nova.virt.libvirt.driver [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deleting instance files /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686_del#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.660 254824 INFO nova.virt.libvirt.driver [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deletion of /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686_del complete#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.722 254824 INFO nova.compute.manager [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.723 254824 DEBUG oslo.service.loopingcall [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.724 254824 DEBUG nova.compute.manager [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:08:05 np0005548915 nova_compute[254819]: 2025-12-06 10:08:05.724 254824 DEBUG nova.network.neutron [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:08:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:06 np0005548915 podman[264231]: 2025-12-06 10:08:06.484503859 +0000 UTC m=+0.108032414 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 05:08:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.502 254824 DEBUG nova.network.neutron [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.502 254824 DEBUG nova.network.neutron [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.744 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-unplugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.744 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-unplugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-unplugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:08:06 np0005548915 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 WARNING nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 05:08:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:07.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:07.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:07.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1023 B/s wr, 1 op/s
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:08:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 KiB/s wr, 1 op/s
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:07 np0005548915 nova_compute[254819]: 2025-12-06 10:08:07.982 254824 DEBUG oslo_concurrency.lockutils [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:08:07 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.004 254824 DEBUG nova.network.neutron [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.044 254824 INFO nova.compute.manager [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 2.32 seconds to deallocate network for instance.#033[00m
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.075 254824 DEBUG nova.compute.manager [req-e30346fe-adb8-487b-b4a1-4f9156dff486 req-65ab3b57-c02c-4538-9e86-356080268524 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-deleted-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.158 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.159 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.220 254824 DEBUG oslo_concurrency.processutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:08 np0005548915 podman[264452]: 2025-12-06 10:08:08.536711663 +0000 UTC m=+0.050634897 container create 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:08:08 np0005548915 systemd[1]: Started libpod-conmon-421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc.scope.
Dec  6 05:08:08 np0005548915 podman[264452]: 2025-12-06 10:08:08.51510121 +0000 UTC m=+0.029024494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:08:08 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093960658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.675 254824 DEBUG oslo_concurrency.processutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.683 254824 DEBUG nova.compute.provider_tree [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:08:08 np0005548915 podman[264452]: 2025-12-06 10:08:08.715102523 +0000 UTC m=+0.229025777 container init 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:08:08 np0005548915 podman[264452]: 2025-12-06 10:08:08.723390126 +0000 UTC m=+0.237313360 container start 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:08:08 np0005548915 exciting_pare[264469]: 167 167
Dec  6 05:08:08 np0005548915 systemd[1]: libpod-421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc.scope: Deactivated successfully.
Dec  6 05:08:08 np0005548915 podman[264452]: 2025-12-06 10:08:08.74351616 +0000 UTC m=+0.257439514 container attach 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 05:08:08 np0005548915 podman[264452]: 2025-12-06 10:08:08.744598859 +0000 UTC m=+0.258522113 container died 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 05:08:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay-22e9c6ed9bc4d934fc25cd5d0d45183d4acb28a3bda17c8f6cb743c9e8d2015a-merged.mount: Deactivated successfully.
Dec  6 05:08:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:08.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:08:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.872 254824 DEBUG nova.scheduler.client.report [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:08:08 np0005548915 podman[264452]: 2025-12-06 10:08:08.893835203 +0000 UTC m=+0.407758447 container remove 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 05:08:08 np0005548915 systemd[1]: libpod-conmon-421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc.scope: Deactivated successfully.
Dec  6 05:08:08 np0005548915 nova_compute[254819]: 2025-12-06 10:08:08.911 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:08:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:08:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:09 np0005548915 nova_compute[254819]: 2025-12-06 10:08:09.106 254824 INFO nova.scheduler.client.report [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686#033[00m
Dec  6 05:08:09 np0005548915 podman[264495]: 2025-12-06 10:08:09.10915497 +0000 UTC m=+0.080503422 container create 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 05:08:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:09.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:09 np0005548915 podman[264495]: 2025-12-06 10:08:09.057147138 +0000 UTC m=+0.028495610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:08:09 np0005548915 systemd[1]: Started libpod-conmon-9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051.scope.
Dec  6 05:08:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:09 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:09 np0005548915 podman[264495]: 2025-12-06 10:08:09.225544259 +0000 UTC m=+0.196892731 container init 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 05:08:09 np0005548915 podman[264495]: 2025-12-06 10:08:09.233183264 +0000 UTC m=+0.204531706 container start 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 05:08:09 np0005548915 podman[264495]: 2025-12-06 10:08:09.236562016 +0000 UTC m=+0.207910468 container attach 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:08:09 np0005548915 nova_compute[254819]: 2025-12-06 10:08:09.464 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:09 np0005548915 brave_gates[264512]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:08:09 np0005548915 brave_gates[264512]: --> All data devices are unavailable
Dec  6 05:08:09 np0005548915 systemd[1]: libpod-9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051.scope: Deactivated successfully.
Dec  6 05:08:09 np0005548915 podman[264495]: 2025-12-06 10:08:09.588759454 +0000 UTC m=+0.560107916 container died 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:08:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c-merged.mount: Deactivated successfully.
Dec  6 05:08:09 np0005548915 podman[264495]: 2025-12-06 10:08:09.720561028 +0000 UTC m=+0.691909480 container remove 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 05:08:09 np0005548915 systemd[1]: libpod-conmon-9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051.scope: Deactivated successfully.
Dec  6 05:08:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 33 op/s
Dec  6 05:08:10 np0005548915 nova_compute[254819]: 2025-12-06 10:08:10.173 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:10 np0005548915 podman[264634]: 2025-12-06 10:08:10.273391067 +0000 UTC m=+0.027643186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:08:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:10 np0005548915 podman[264634]: 2025-12-06 10:08:10.381678517 +0000 UTC m=+0.135930586 container create 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:08:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:10 np0005548915 systemd[1]: Started libpod-conmon-5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64.scope.
Dec  6 05:08:10 np0005548915 nova_compute[254819]: 2025-12-06 10:08:10.563 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:10 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:10 np0005548915 podman[264634]: 2025-12-06 10:08:10.687103324 +0000 UTC m=+0.441355383 container init 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 05:08:10 np0005548915 podman[264634]: 2025-12-06 10:08:10.697303339 +0000 UTC m=+0.451555378 container start 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:08:10 np0005548915 podman[264634]: 2025-12-06 10:08:10.70105185 +0000 UTC m=+0.455303889 container attach 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 05:08:10 np0005548915 cranky_hermann[264652]: 167 167
Dec  6 05:08:10 np0005548915 systemd[1]: libpod-5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64.scope: Deactivated successfully.
Dec  6 05:08:10 np0005548915 podman[264634]: 2025-12-06 10:08:10.703564498 +0000 UTC m=+0.457816567 container died 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:08:10 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f8b4937f64b7e19e49e1dfe4e879b86151bc905e579014dc530f3ba0fc51b16d-merged.mount: Deactivated successfully.
Dec  6 05:08:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:08:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:08:10 np0005548915 podman[264634]: 2025-12-06 10:08:10.924150187 +0000 UTC m=+0.678402226 container remove 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:08:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:10 np0005548915 systemd[1]: libpod-conmon-5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64.scope: Deactivated successfully.
Dec  6 05:08:10 np0005548915 podman[264654]: 2025-12-06 10:08:10.997822354 +0000 UTC m=+0.414201182 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  6 05:08:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:11.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:11.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:11 np0005548915 podman[264698]: 2025-12-06 10:08:11.181334233 +0000 UTC m=+0.085179339 container create 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:08:11 np0005548915 podman[264698]: 2025-12-06 10:08:11.128049726 +0000 UTC m=+0.031894852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:08:11 np0005548915 systemd[1]: Started libpod-conmon-16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b.scope.
Dec  6 05:08:11 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:11 np0005548915 podman[264698]: 2025-12-06 10:08:11.283294652 +0000 UTC m=+0.187139758 container init 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 05:08:11 np0005548915 podman[264698]: 2025-12-06 10:08:11.292699857 +0000 UTC m=+0.196544963 container start 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  6 05:08:11 np0005548915 podman[264698]: 2025-12-06 10:08:11.323860927 +0000 UTC m=+0.227706073 container attach 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]: {
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:    "1": [
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:        {
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "devices": [
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "/dev/loop3"
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            ],
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "lv_name": "ceph_lv0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "lv_size": "21470642176",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "name": "ceph_lv0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "tags": {
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.cluster_name": "ceph",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.crush_device_class": "",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.encrypted": "0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.osd_id": "1",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.type": "block",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.vdo": "0",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:                "ceph.with_tpm": "0"
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            },
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "type": "block",
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:            "vg_name": "ceph_vg0"
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:        }
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]:    ]
Dec  6 05:08:11 np0005548915 xenodochial_spence[264714]: }
Dec  6 05:08:11 np0005548915 systemd[1]: libpod-16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b.scope: Deactivated successfully.
Dec  6 05:08:11 np0005548915 podman[264725]: 2025-12-06 10:08:11.651725459 +0000 UTC m=+0.025719535 container died 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec  6 05:08:11 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22-merged.mount: Deactivated successfully.
Dec  6 05:08:11 np0005548915 podman[264725]: 2025-12-06 10:08:11.850060667 +0000 UTC m=+0.224054723 container remove 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:08:11 np0005548915 systemd[1]: libpod-conmon-16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b.scope: Deactivated successfully.
Dec  6 05:08:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 33 op/s
Dec  6 05:08:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:12 np0005548915 podman[264858]: 2025-12-06 10:08:12.557888635 +0000 UTC m=+0.024572674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:08:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:12 np0005548915 podman[264858]: 2025-12-06 10:08:12.919809636 +0000 UTC m=+0.386493655 container create 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:08:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:12 np0005548915 systemd[1]: Started libpod-conmon-98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75.scope.
Dec  6 05:08:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:13.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:13 np0005548915 podman[264858]: 2025-12-06 10:08:13.020382968 +0000 UTC m=+0.487067007 container init 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:08:13 np0005548915 podman[264858]: 2025-12-06 10:08:13.027009097 +0000 UTC m=+0.493693116 container start 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:08:13 np0005548915 podman[264858]: 2025-12-06 10:08:13.0308283 +0000 UTC m=+0.497512319 container attach 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 05:08:13 np0005548915 nervous_volhard[264876]: 167 167
Dec  6 05:08:13 np0005548915 systemd[1]: libpod-98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75.scope: Deactivated successfully.
Dec  6 05:08:13 np0005548915 podman[264858]: 2025-12-06 10:08:13.032931687 +0000 UTC m=+0.499615706 container died 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  6 05:08:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:13.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:13 np0005548915 systemd[1]: var-lib-containers-storage-overlay-95a6b7862ca183b57e2888201de853efa44462ccfd3479c4892ac80751aebdce-merged.mount: Deactivated successfully.
Dec  6 05:08:13 np0005548915 podman[264858]: 2025-12-06 10:08:13.441318531 +0000 UTC m=+0.908002550 container remove 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 05:08:13 np0005548915 systemd[1]: libpod-conmon-98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75.scope: Deactivated successfully.
Dec  6 05:08:13 np0005548915 podman[264904]: 2025-12-06 10:08:13.662740232 +0000 UTC m=+0.100122091 container create 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:08:13 np0005548915 podman[264904]: 2025-12-06 10:08:13.588673264 +0000 UTC m=+0.026055163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:08:13 np0005548915 systemd[1]: Started libpod-conmon-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope.
Dec  6 05:08:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec  6 05:08:13 np0005548915 podman[264904]: 2025-12-06 10:08:13.923369951 +0000 UTC m=+0.360751900 container init 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:08:13 np0005548915 podman[264904]: 2025-12-06 10:08:13.937512352 +0000 UTC m=+0.374894211 container start 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:08:13 np0005548915 podman[264904]: 2025-12-06 10:08:13.941136929 +0000 UTC m=+0.378518808 container attach 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:08:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:14 np0005548915 lvm[264995]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:08:14 np0005548915 lvm[264995]: VG ceph_vg0 finished
Dec  6 05:08:14 np0005548915 vigorous_perlman[264921]: {}
Dec  6 05:08:14 np0005548915 systemd[1]: libpod-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope: Deactivated successfully.
Dec  6 05:08:14 np0005548915 systemd[1]: libpod-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope: Consumed 1.140s CPU time.
Dec  6 05:08:14 np0005548915 podman[264904]: 2025-12-06 10:08:14.679543723 +0000 UTC m=+1.116925642 container died 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:08:14 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733-merged.mount: Deactivated successfully.
Dec  6 05:08:14 np0005548915 podman[264904]: 2025-12-06 10:08:14.910537002 +0000 UTC m=+1.347918861 container remove 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:08:14 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:14.916 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:08:14 np0005548915 nova_compute[254819]: 2025-12-06 10:08:14.917 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:14 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:14.919 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:08:14 np0005548915 systemd[1]: libpod-conmon-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope: Deactivated successfully.
Dec  6 05:08:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:08:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:08:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:15.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:15 np0005548915 nova_compute[254819]: 2025-12-06 10:08:15.174 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:15 np0005548915 nova_compute[254819]: 2025-12-06 10:08:15.565 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:15 np0005548915 nova_compute[254819]: 2025-12-06 10:08:15.691 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:15 np0005548915 nova_compute[254819]: 2025-12-06 10:08:15.810 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec  6 05:08:16 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:16 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:08:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:17.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:17.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:17.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:08:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:17.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:08:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:17.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:08:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec  6 05:08:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:19.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec  6 05:08:20 np0005548915 nova_compute[254819]: 2025-12-06 10:08:20.148 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015685.1472943, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:08:20 np0005548915 nova_compute[254819]: 2025-12-06 10:08:20.149 254824 INFO nova.compute.manager [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:08:20 np0005548915 nova_compute[254819]: 2025-12-06 10:08:20.231 254824 DEBUG nova.compute.manager [None req-609a1ee6-6c9e-4245-8c64-7e88cf684358 - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:08:20 np0005548915 nova_compute[254819]: 2025-12-06 10:08:20.231 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:20 np0005548915 nova_compute[254819]: 2025-12-06 10:08:20.567 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:08:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:08:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:21.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:21.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:08:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:23.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:08:23
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', '.nfs']
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:08:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:23.921 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:08:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:08:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:08:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:08:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:08:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:25.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:25.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:25 np0005548915 nova_compute[254819]: 2025-12-06 10:08:25.261 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:25 np0005548915 nova_compute[254819]: 2025-12-06 10:08:25.570 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:08:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec  6 05:08:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:27.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:27.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:27.278Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:08:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:28.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:29.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:29.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:08:30 np0005548915 nova_compute[254819]: 2025-12-06 10:08:30.284 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:30 np0005548915 nova_compute[254819]: 2025-12-06 10:08:30.573 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:08:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:08:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:31.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.048 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.049 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.069 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.146 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.146 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.155 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.156 254824 INFO nova.compute.claims [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:08:32 np0005548915 podman[265079]: 2025-12-06 10:08:32.226769477 +0000 UTC m=+0.077152951 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.272 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:08:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132300413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.712 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.719 254824 DEBUG nova.compute.provider_tree [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.739 254824 DEBUG nova.scheduler.client.report [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.771 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.771 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.842 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.843 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.872 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:08:32 np0005548915 nova_compute[254819]: 2025-12-06 10:08:32.893 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:08:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.005 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.006 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.006 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Creating image(s)#033[00m
Dec  6 05:08:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.048 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.092 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.130 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.135 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.160 254824 DEBUG nova.policy [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:08:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:33.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.212 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.213 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.213 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.213 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.246 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.252 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 112440c2-8dcc-4a19-9d83-5489df97079a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.555 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 112440c2-8dcc-4a19-9d83-5489df97079a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.653 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.787 254824 DEBUG nova.objects.instance [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.807 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.808 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Ensure instance console log exists: /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.808 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.809 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:33 np0005548915 nova_compute[254819]: 2025-12-06 10:08:33.809 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.088 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Successfully created port: 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:08:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.715 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Successfully updated port: 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.728 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.804 254824 DEBUG nova.compute.manager [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.804 254824 DEBUG nova.compute.manager [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing instance network info cache due to event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:08:34 np0005548915 nova_compute[254819]: 2025-12-06 10:08:34.805 254824 DEBUG oslo_concurrency.lockutils [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:08:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:35 np0005548915 nova_compute[254819]: 2025-12-06 10:08:35.021 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:08:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:35.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:35 np0005548915 nova_compute[254819]: 2025-12-06 10:08:35.287 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:35 np0005548915 nova_compute[254819]: 2025-12-06 10:08:35.575 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.117 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.140 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.141 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance network_info: |[{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.142 254824 DEBUG oslo_concurrency.lockutils [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.142 254824 DEBUG nova.network.neutron [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.147 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start _get_guest_xml network_info=[{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.152 254824 WARNING nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.158 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.159 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.162 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.163 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.164 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.164 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.165 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.166 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.166 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.167 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.167 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.167 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.168 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.168 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.169 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.169 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.176 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:08:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:08:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198037829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.693 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.725 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:08:36 np0005548915 nova_compute[254819]: 2025-12-06 10:08:36.731 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:36 np0005548915 kernel: ganesha.nfsd[263832]: segfault at 50 ip 00007f335a2f032e sp 00007f3312ffc210 error 4 in libntirpc.so.5.8[7f335a2d5000+2c000] likely on CPU 4 (core 0, socket 4)
Dec  6 05:08:36 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 05:08:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c70 fd 39 proxy ignored for local
Dec  6 05:08:36 np0005548915 systemd[1]: Started Process Core Dump (PID 265354/UID 0).
Dec  6 05:08:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:37.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:37 np0005548915 podman[265355]: 2025-12-06 10:08:37.114728505 +0000 UTC m=+0.107203952 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller)
Dec  6 05:08:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:08:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332401644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.242 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.244 254824 DEBUG nova.virt.libvirt.vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-609462386',display_name='tempest-TestNetworkBasicOps-server-609462386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-609462386',id=4,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB71wqy4Vx0ThrIuit7bIMfXK6YLKUBZN1lhipBZkl9t8qtDE6kg/NsSamOzTH/a+zjpG46+Awuo3QHJ780QH0C6lo/2uOHg18NVMuqh+pfDOXzTKYCxhRCIxLSg0ck4w==',key_name='tempest-TestNetworkBasicOps-1991615071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-ykqs2wqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:08:32Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=112440c2-8dcc-4a19-9d83-5489df97079a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.244 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.245 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.246 254824 DEBUG nova.objects.instance [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.262 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <uuid>112440c2-8dcc-4a19-9d83-5489df97079a</uuid>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <name>instance-00000004</name>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-609462386</nova:name>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:08:36</nova:creationTime>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <nova:port uuid="2d0118f7-94f6-43f6-a67f-28e0faf9c3ae">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <entry name="serial">112440c2-8dcc-4a19-9d83-5489df97079a</entry>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <entry name="uuid">112440c2-8dcc-4a19-9d83-5489df97079a</entry>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/112440c2-8dcc-4a19-9d83-5489df97079a_disk">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/112440c2-8dcc-4a19-9d83-5489df97079a_disk.config">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:b4:37:0e"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <target dev="tap2d0118f7-94"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/console.log" append="off"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:08:37 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:08:37 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:08:37 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:08:37 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.264 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Preparing to wait for external event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.264 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.264 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.265 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.265 254824 DEBUG nova.virt.libvirt.vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-609462386',display_name='tempest-TestNetworkBasicOps-server-609462386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-609462386',id=4,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB71wqy4Vx0ThrIuit7bIMfXK6YLKUBZN1lhipBZkl9t8qtDE6kg/NsSamOzTH/a+zjpG46+Awuo3QHJ780QH0C6lo/2uOHg18NVMuqh+pfDOXzTKYCxhRCIxLSg0ck4w==',key_name='tempest-TestNetworkBasicOps-1991615071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-ykqs2wqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:08:32Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=112440c2-8dcc-4a19-9d83-5489df97079a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.265 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.266 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.266 254824 DEBUG os_vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.267 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.267 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.268 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.271 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.271 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d0118f7-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.271 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d0118f7-94, col_values=(('external_ids', {'iface-id': '2d0118f7-94f6-43f6-a67f-28e0faf9c3ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:37:0e', 'vm-uuid': '112440c2-8dcc-4a19-9d83-5489df97079a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.273 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:37 np0005548915 NetworkManager[48882]: <info>  [1765015717.2739] manager: (tap2d0118f7-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.275 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:08:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:37.279Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:08:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:37.279Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.284 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.285 254824 INFO os_vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94')#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.343 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.343 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.343 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:b4:37:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.344 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Using config drive#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.369 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.631 254824 DEBUG nova.network.neutron [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated VIF entry in instance network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.632 254824 DEBUG nova.network.neutron [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.647 254824 DEBUG oslo_concurrency.lockutils [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.684 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Creating config drive at /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.688 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprc8iw27 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.824 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprc8iw27" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.853 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:08:37 np0005548915 nova_compute[254819]: 2025-12-06 10:08:37.858 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:08:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:38.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:39.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:08:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:08:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:08:39 np0005548915 systemd-coredump[265356]: Process 262270 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f335a2f032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 05:08:39 np0005548915 nova_compute[254819]: 2025-12-06 10:08:39.958 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:39 np0005548915 nova_compute[254819]: 2025-12-06 10:08:39.960 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deleting local config drive /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config because it was imported into RBD.#033[00m
Dec  6 05:08:40 np0005548915 kernel: tap2d0118f7-94: entered promiscuous mode
Dec  6 05:08:40 np0005548915 systemd[1]: systemd-coredump@9-265354-0.service: Deactivated successfully.
Dec  6 05:08:40 np0005548915 NetworkManager[48882]: <info>  [1765015720.0541] manager: (tap2d0118f7-94): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Dec  6 05:08:40 np0005548915 systemd[1]: systemd-coredump@9-265354-0.service: Consumed 1.214s CPU time.
Dec  6 05:08:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:40Z|00055|binding|INFO|Claiming lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for this chassis.
Dec  6 05:08:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:40Z|00056|binding|INFO|2d0118f7-94f6-43f6-a67f-28e0faf9c3ae: Claiming fa:16:3e:b4:37:0e 10.100.0.5
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.101 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.115 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:37:0e 10.100.0.5'], port_security=['fa:16:3e:b4:37:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '112440c2-8dcc-4a19-9d83-5489df97079a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3027a471-10b5-4a61-b09a-0f0e6072fde1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=611cd505-2a02-4d45-a906-bd97d1447953, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.116 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae in datapath dccd9941-4f3e-4086-b9cd-651d8e99e8ec bound to our chassis#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.118 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dccd9941-4f3e-4086-b9cd-651d8e99e8ec#033[00m
Dec  6 05:08:40 np0005548915 systemd-udevd[265471]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:08:40 np0005548915 systemd-machined[216202]: New machine qemu-3-instance-00000004.
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.135 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[761d5912-866c-498b-a211-e5a6727da3cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.137 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdccd9941-41 in ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.139 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdccd9941-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.139 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[36e6d1b0-6033-4712-9612-34cb9fa9ea3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.140 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8303b8e0-9d22-4a32-aa2c-6fd960c961a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 NetworkManager[48882]: <info>  [1765015720.1462] device (tap2d0118f7-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:08:40 np0005548915 NetworkManager[48882]: <info>  [1765015720.1472] device (tap2d0118f7-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.151 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f91b5c-2592-49b5-9437-bcb28e9b7fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 podman[265463]: 2025-12-06 10:08:40.157384788 +0000 UTC m=+0.041415457 container died f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:08:40 np0005548915 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.179 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0efe9734-5e51-4591-88da-98170b446a4a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.186 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7-merged.mount: Deactivated successfully.
Dec  6 05:08:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:40Z|00057|binding|INFO|Setting lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae ovn-installed in OVS
Dec  6 05:08:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:40Z|00058|binding|INFO|Setting lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae up in Southbound
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.191 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 podman[265463]: 2025-12-06 10:08:40.21157909 +0000 UTC m=+0.095609749 container remove f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.214 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[ea429b55-838d-4006-b764-9193269bfaec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.219 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0f9355-5c76-4fda-b6fc-c4ff649e0112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 NetworkManager[48882]: <info>  [1765015720.2207] manager: (tapdccd9941-40): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Dec  6 05:08:40 np0005548915 systemd-udevd[265479]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.261 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[b635b53a-b521-47a7-a4e7-73d2d60d7da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.265 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[873e958a-7869-47ad-af39-8b22d1686264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 NetworkManager[48882]: <info>  [1765015720.2935] device (tapdccd9941-40): carrier: link connected
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.297 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[319e5e10-1cfa-4116-9e24-e189a3835c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.317 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7022e6-b44e-4606-95d7-9af060abd501]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdccd9941-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b1:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409077, 'reachable_time': 32278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265529, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.333 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b84d9611-a72f-4983-a638-c93825fe4c27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:b1b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409077, 'tstamp': 409077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265540, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.354 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e96377be-f281-449c-b2cc-8c61b1c64c67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdccd9941-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b1:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409077, 'reachable_time': 32278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265543, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 05:08:40 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.354s CPU time.
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.383 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[30ee7516-504a-4278-9291-d9883ec1611d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.424 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8a455c1f-ce0a-4702-839e-f1e206e965e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.425 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdccd9941-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.425 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.426 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdccd9941-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.427 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 NetworkManager[48882]: <info>  [1765015720.4280] manager: (tapdccd9941-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  6 05:08:40 np0005548915 kernel: tapdccd9941-40: entered promiscuous mode
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.430 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdccd9941-40, col_values=(('external_ids', {'iface-id': '5c84c258-875b-4b17-864b-0a3a247ec558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.431 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:40Z|00059|binding|INFO|Releasing lport 5c84c258-875b-4b17-864b-0a3a247ec558 from this chassis (sb_readonly=0)
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.433 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.434 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[471558ea-9691-42ea-96f0-20d061927c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.435 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-dccd9941-4f3e-4086-b9cd-651d8e99e8ec
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.pid.haproxy
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID dccd9941-4f3e-4086-b9cd-651d8e99e8ec
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:08:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.435 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'env', 'PROCESS_TAG=haproxy-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.446 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.578 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.774 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.774 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:40 np0005548915 podman[265617]: 2025-12-06 10:08:40.796108614 +0000 UTC m=+0.055524618 container create b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.796 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015720.7734509, 112440c2-8dcc-4a19-9d83-5489df97079a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.797 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Started (Lifecycle Event)#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.818 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.823 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015720.7739744, 112440c2-8dcc-4a19-9d83-5489df97079a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.824 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Paused (Lifecycle Event)#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.841 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.847 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:08:40 np0005548915 systemd[1]: Started libpod-conmon-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9.scope.
Dec  6 05:08:40 np0005548915 podman[265617]: 2025-12-06 10:08:40.766295621 +0000 UTC m=+0.025711625 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:08:40 np0005548915 nova_compute[254819]: 2025-12-06 10:08:40.870 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:08:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:08:40 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb19a4954b41a05ededd94f9209e0e9572500e71415f9c5c428921ac41b73efd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:08:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:08:40 np0005548915 podman[265617]: 2025-12-06 10:08:40.903368488 +0000 UTC m=+0.162784502 container init b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 05:08:40 np0005548915 podman[265617]: 2025-12-06 10:08:40.909572135 +0000 UTC m=+0.168988129 container start b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:08:40 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : New worker (265658) forked
Dec  6 05:08:40 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : Loading success.
Dec  6 05:08:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:08:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219827938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.225 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.238 254824 DEBUG nova.compute.manager [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.238 254824 DEBUG oslo_concurrency.lockutils [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.238 254824 DEBUG oslo_concurrency.lockutils [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.239 254824 DEBUG oslo_concurrency.lockutils [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.239 254824 DEBUG nova.compute.manager [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Processing event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.239 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.242 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015721.2422922, 112440c2-8dcc-4a19-9d83-5489df97079a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.242 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Resumed (Lifecycle Event)#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.246 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.253 254824 INFO nova.virt.libvirt.driver [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance spawned successfully.#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.255 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.263 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.267 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.280 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.280 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.280 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.281 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.281 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.281 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.288 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.338 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.339 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.354 254824 INFO nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 8.35 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.356 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:08:41 np0005548915 podman[265669]: 2025-12-06 10:08:41.397961485 +0000 UTC m=+0.108354073 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.418 254824 INFO nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 9.29 seconds to build instance.#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.433 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.537 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.538 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4471MB free_disk=59.96752166748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.538 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.539 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.826 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 112440c2-8dcc-4a19-9d83-5489df97079a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.827 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.828 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:08:41 np0005548915 nova_compute[254819]: 2025-12-06 10:08:41.865 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:08:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:08:42 np0005548915 nova_compute[254819]: 2025-12-06 10:08:42.310 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:08:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2139792515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:08:42 np0005548915 nova_compute[254819]: 2025-12-06 10:08:42.417 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:08:42 np0005548915 nova_compute[254819]: 2025-12-06 10:08:42.425 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:08:42 np0005548915 nova_compute[254819]: 2025-12-06 10:08:42.443 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:08:42 np0005548915 nova_compute[254819]: 2025-12-06 10:08:42.465 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:08:42 np0005548915 nova_compute[254819]: 2025-12-06 10:08:42.466 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:43.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.339 254824 DEBUG nova.compute.manager [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.340 254824 DEBUG oslo_concurrency.lockutils [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.341 254824 DEBUG oslo_concurrency.lockutils [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.341 254824 DEBUG oslo_concurrency.lockutils [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.341 254824 DEBUG nova.compute.manager [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] No waiting events found dispatching network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.342 254824 WARNING nova.compute.manager [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received unexpected event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for instance with vm_state active and task_state None.#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.467 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.487 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.488 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:08:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.939 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.940 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.941 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:08:43 np0005548915 nova_compute[254819]: 2025-12-06 10:08:43.942 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:08:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100844 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:08:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:45.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:45 np0005548915 nova_compute[254819]: 2025-12-06 10:08:45.579 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:45 np0005548915 nova_compute[254819]: 2025-12-06 10:08:45.825 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:45 np0005548915 nova_compute[254819]: 2025-12-06 10:08:45.847 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:08:45 np0005548915 nova_compute[254819]: 2025-12-06 10:08:45.848 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:08:45 np0005548915 nova_compute[254819]: 2025-12-06 10:08:45.849 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:45 np0005548915 nova_compute[254819]: 2025-12-06 10:08:45.850 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:45 np0005548915 nova_compute[254819]: 2025-12-06 10:08:45.850 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:08:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.067 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:46 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:46Z|00060|binding|INFO|Releasing lport 5c84c258-875b-4b17-864b-0a3a247ec558 from this chassis (sb_readonly=0)
Dec  6 05:08:46 np0005548915 NetworkManager[48882]: <info>  [1765015726.0690] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec  6 05:08:46 np0005548915 NetworkManager[48882]: <info>  [1765015726.0701] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  6 05:08:46 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:46Z|00061|binding|INFO|Releasing lport 5c84c258-875b-4b17-864b-0a3a247ec558 from this chassis (sb_readonly=0)
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.110 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.116 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.341 254824 DEBUG nova.compute.manager [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.342 254824 DEBUG nova.compute.manager [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing instance network info cache due to event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.342 254824 DEBUG oslo_concurrency.lockutils [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.343 254824 DEBUG oslo_concurrency.lockutils [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.343 254824 DEBUG nova.network.neutron [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:08:46 np0005548915 nova_compute[254819]: 2025-12-06 10:08:46.844 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:08:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:47.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:47.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:47 np0005548915 nova_compute[254819]: 2025-12-06 10:08:47.315 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec  6 05:08:48 np0005548915 nova_compute[254819]: 2025-12-06 10:08:48.360 254824 DEBUG nova.network.neutron [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated VIF entry in instance network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:08:48 np0005548915 nova_compute[254819]: 2025-12-06 10:08:48.362 254824 DEBUG nova.network.neutron [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:08:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:48.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:49 np0005548915 nova_compute[254819]: 2025-12-06 10:08:49.020 254824 DEBUG oslo_concurrency.lockutils [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:08:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:49.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec  6 05:08:50 np0005548915 nova_compute[254819]: 2025-12-06 10:08:50.582 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:50 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 10.
Dec  6 05:08:50 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:08:50 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.354s CPU time.
Dec  6 05:08:50 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 05:08:50 np0005548915 podman[265770]: 2025-12-06 10:08:50.863643126 +0000 UTC m=+0.049119326 container create af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:08:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:08:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:08:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:08:50 np0005548915 podman[265770]: 2025-12-06 10:08:50.836881794 +0000 UTC m=+0.022357994 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:08:50 np0005548915 podman[265770]: 2025-12-06 10:08:50.940354284 +0000 UTC m=+0.125830504 container init af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:08:50 np0005548915 podman[265770]: 2025-12-06 10:08:50.947684182 +0000 UTC m=+0.133160372 container start af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:08:50 np0005548915 bash[265770]: af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d
Dec  6 05:08:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:50 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 05:08:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:50 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 05:08:50 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:08:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 05:08:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 05:08:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 05:08:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 05:08:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 05:08:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:08:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:08:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:08:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  6 05:08:52 np0005548915 nova_compute[254819]: 2025-12-06 10:08:52.319 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:08:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:53.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:08:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:08:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:08:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Dec  6 05:08:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:08:53 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:08:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:08:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:08:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:08:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:08:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:08:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:08:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:08:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:08:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:55.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:55 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:55Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:37:0e 10.100.0.5
Dec  6 05:08:55 np0005548915 ovn_controller[152417]: 2025-12-06T10:08:55Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:37:0e 10.100.0.5
Dec  6 05:08:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:55.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:55 np0005548915 nova_compute[254819]: 2025-12-06 10:08:55.584 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec  6 05:08:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:57.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:57 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:08:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:57 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:08:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:57.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:57.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:08:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:57.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:08:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:57.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:08:57 np0005548915 nova_compute[254819]: 2025-12-06 10:08:57.323 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:08:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec  6 05:08:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:58.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:08:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:08:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:59.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:08:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:08:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:59.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:08:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  6 05:09:00 np0005548915 nova_compute[254819]: 2025-12-06 10:09:00.587 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:00 np0005548915 nova_compute[254819]: 2025-12-06 10:09:00.786 254824 INFO nova.compute.manager [None req-2db727e9-e55e-4849-be94-b6f7817bb971 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Get console output#033[00m
Dec  6 05:09:00 np0005548915 nova_compute[254819]: 2025-12-06 10:09:00.792 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:09:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:09:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:09:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:01.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  6 05:09:02 np0005548915 nova_compute[254819]: 2025-12-06 10:09:02.327 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:02 np0005548915 podman[265866]: 2025-12-06 10:09:02.445923949 +0000 UTC m=+0.075573949 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 05:09:02 np0005548915 nova_compute[254819]: 2025-12-06 10:09:02.492 254824 DEBUG nova.compute.manager [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:09:02 np0005548915 nova_compute[254819]: 2025-12-06 10:09:02.493 254824 DEBUG nova.compute.manager [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing instance network info cache due to event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:09:02 np0005548915 nova_compute[254819]: 2025-12-06 10:09:02.493 254824 DEBUG oslo_concurrency.lockutils [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:09:02 np0005548915 nova_compute[254819]: 2025-12-06 10:09:02.493 254824 DEBUG oslo_concurrency.lockutils [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:09:02 np0005548915 nova_compute[254819]: 2025-12-06 10:09:02.494 254824 DEBUG nova.network.neutron [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:09:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:03.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 05:09:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:09:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:03 np0005548915 nova_compute[254819]: 2025-12-06 10:09:03.555 254824 DEBUG nova.network.neutron [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated VIF entry in instance network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:09:03 np0005548915 nova_compute[254819]: 2025-12-06 10:09:03.556 254824 DEBUG nova.network.neutron [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:09:03 np0005548915 nova_compute[254819]: 2025-12-06 10:09:03.585 254824 DEBUG oslo_concurrency.lockutils [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:09:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec  6 05:09:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:04 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e24000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:04 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:04 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:05.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:05 np0005548915 nova_compute[254819]: 2025-12-06 10:09:05.590 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 107 KiB/s wr, 38 op/s
Dec  6 05:09:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:06 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:06 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100906 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:09:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:06 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:07.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:07.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:07.283Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:09:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:07.284Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:07 np0005548915 nova_compute[254819]: 2025-12-06 10:09:07.331 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:07 np0005548915 podman[265908]: 2025-12-06 10:09:07.457373848 +0000 UTC m=+0.087852660 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:09:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 107 KiB/s wr, 38 op/s
Dec  6 05:09:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:08 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:08 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e180013d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:08.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:09:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:09:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:08 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e200029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:09.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:09.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 113 KiB/s wr, 38 op/s
Dec  6 05:09:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:10 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:10 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:10 np0005548915 nova_compute[254819]: 2025-12-06 10:09:10.592 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Dec  6 05:09:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Dec  6 05:09:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:10 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18001ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:11.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:11.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec  6 05:09:12 np0005548915 nova_compute[254819]: 2025-12-06 10:09:12.335 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:12 np0005548915 podman[265963]: 2025-12-06 10:09:12.418653487 +0000 UTC m=+0.057129798 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 05:09:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:12 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e200029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:12 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:12 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:13.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:13.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec  6 05:09:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:14 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18001ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:14 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e200029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:14 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:15.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:15 np0005548915 nova_compute[254819]: 2025-12-06 10:09:15.593 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:09:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 05:09:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:09:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 05:09:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:16 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:16 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18001ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:09:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:09:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:09:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:16 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:09:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:17.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:09:17 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:09:17 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:17 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:17 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:09:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:17.285Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:17 np0005548915 nova_compute[254819]: 2025-12-06 10:09:17.338 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:17 np0005548915 podman[266235]: 2025-12-06 10:09:17.463754513 +0000 UTC m=+0.057428275 container create f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 05:09:17 np0005548915 systemd[1]: Started libpod-conmon-f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e.scope.
Dec  6 05:09:17 np0005548915 podman[266235]: 2025-12-06 10:09:17.438850959 +0000 UTC m=+0.032524701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:09:17 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:09:17 np0005548915 podman[266235]: 2025-12-06 10:09:17.575950922 +0000 UTC m=+0.169624674 container init f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:09:17 np0005548915 podman[266235]: 2025-12-06 10:09:17.592056133 +0000 UTC m=+0.185729845 container start f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:09:17 np0005548915 podman[266235]: 2025-12-06 10:09:17.595447703 +0000 UTC m=+0.189121505 container attach f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:09:17 np0005548915 angry_napier[266253]: 167 167
Dec  6 05:09:17 np0005548915 systemd[1]: libpod-f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e.scope: Deactivated successfully.
Dec  6 05:09:17 np0005548915 podman[266235]: 2025-12-06 10:09:17.603343624 +0000 UTC m=+0.197017386 container died f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:09:17 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7571f8847bc2b336cebb8e4b0b1fc704059f2058e7693785ee9bd13791e0c578-merged.mount: Deactivated successfully.
Dec  6 05:09:17 np0005548915 podman[266235]: 2025-12-06 10:09:17.657660547 +0000 UTC m=+0.251334309 container remove f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:09:17 np0005548915 systemd[1]: libpod-conmon-f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e.scope: Deactivated successfully.
Dec  6 05:09:17 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:17.796 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:09:17 np0005548915 nova_compute[254819]: 2025-12-06 10:09:17.799 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:17 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:17.799 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:09:17 np0005548915 podman[266277]: 2025-12-06 10:09:17.910442122 +0000 UTC m=+0.063126378 container create d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 05:09:17 np0005548915 systemd[1]: Started libpod-conmon-d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70.scope.
Dec  6 05:09:17 np0005548915 podman[266277]: 2025-12-06 10:09:17.883072391 +0000 UTC m=+0.035756647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:09:17 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:09:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:18 np0005548915 podman[266277]: 2025-12-06 10:09:18.021046318 +0000 UTC m=+0.173730554 container init d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 05:09:18 np0005548915 podman[266277]: 2025-12-06 10:09:18.027527672 +0000 UTC m=+0.180211898 container start d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 05:09:18 np0005548915 podman[266277]: 2025-12-06 10:09:18.031992781 +0000 UTC m=+0.184677017 container attach d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:09:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:18 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:18 np0005548915 flamboyant_rhodes[266293]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:09:18 np0005548915 flamboyant_rhodes[266293]: --> All data devices are unavailable
Dec  6 05:09:18 np0005548915 systemd[1]: libpod-d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70.scope: Deactivated successfully.
Dec  6 05:09:18 np0005548915 podman[266277]: 2025-12-06 10:09:18.49217938 +0000 UTC m=+0.644863606 container died d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  6 05:09:18 np0005548915 systemd[1]: var-lib-containers-storage-overlay-dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc-merged.mount: Deactivated successfully.
Dec  6 05:09:18 np0005548915 podman[266277]: 2025-12-06 10:09:18.54303803 +0000 UTC m=+0.695722266 container remove d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  6 05:09:18 np0005548915 systemd[1]: libpod-conmon-d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70.scope: Deactivated successfully.
Dec  6 05:09:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:18 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec  6 05:09:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:18.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:09:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:18.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:09:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:18.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:18 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:19 np0005548915 podman[266414]: 2025-12-06 10:09:19.380451651 +0000 UTC m=+0.061209607 container create 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 05:09:19 np0005548915 systemd[1]: Started libpod-conmon-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope.
Dec  6 05:09:19 np0005548915 podman[266414]: 2025-12-06 10:09:19.361143795 +0000 UTC m=+0.041901771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:09:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:09:19 np0005548915 podman[266414]: 2025-12-06 10:09:19.481979364 +0000 UTC m=+0.162737330 container init 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 05:09:19 np0005548915 podman[266414]: 2025-12-06 10:09:19.494747075 +0000 UTC m=+0.175505061 container start 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 05:09:19 np0005548915 podman[266414]: 2025-12-06 10:09:19.503245422 +0000 UTC m=+0.184003388 container attach 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 05:09:19 np0005548915 dreamy_shannon[266430]: 167 167
Dec  6 05:09:19 np0005548915 systemd[1]: libpod-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope: Deactivated successfully.
Dec  6 05:09:19 np0005548915 conmon[266430]: conmon 82fe9d465ef37788ca7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope/container/memory.events
Dec  6 05:09:19 np0005548915 podman[266414]: 2025-12-06 10:09:19.508832361 +0000 UTC m=+0.189590347 container died 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:09:19 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d30384813c6d1f79449a6b85afed6f70b869d2639df526d1ef9c3f7a97ec1749-merged.mount: Deactivated successfully.
Dec  6 05:09:19 np0005548915 podman[266414]: 2025-12-06 10:09:19.563586155 +0000 UTC m=+0.244344141 container remove 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:09:19 np0005548915 systemd[1]: libpod-conmon-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope: Deactivated successfully.
Dec  6 05:09:19 np0005548915 podman[266455]: 2025-12-06 10:09:19.786460792 +0000 UTC m=+0.068378859 container create 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:09:19 np0005548915 systemd[1]: Started libpod-conmon-692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25.scope.
Dec  6 05:09:19 np0005548915 podman[266455]: 2025-12-06 10:09:19.761374402 +0000 UTC m=+0.043292479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:09:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:09:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:19 np0005548915 podman[266455]: 2025-12-06 10:09:19.917264298 +0000 UTC m=+0.199182425 container init 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:09:19 np0005548915 podman[266455]: 2025-12-06 10:09:19.931547759 +0000 UTC m=+0.213465806 container start 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:09:19 np0005548915 podman[266455]: 2025-12-06 10:09:19.937654553 +0000 UTC m=+0.219572600 container attach 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]: {
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:    "1": [
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:        {
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "devices": [
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "/dev/loop3"
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            ],
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "lv_name": "ceph_lv0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "lv_size": "21470642176",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "name": "ceph_lv0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "tags": {
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.cluster_name": "ceph",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.crush_device_class": "",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.encrypted": "0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.osd_id": "1",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.type": "block",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.vdo": "0",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:                "ceph.with_tpm": "0"
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            },
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "type": "block",
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:            "vg_name": "ceph_vg0"
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:        }
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]:    ]
Dec  6 05:09:20 np0005548915 affectionate_hofstadter[266471]: }
Dec  6 05:09:20 np0005548915 systemd[1]: libpod-692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25.scope: Deactivated successfully.
Dec  6 05:09:20 np0005548915 podman[266455]: 2025-12-06 10:09:20.273454397 +0000 UTC m=+0.555372434 container died 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 05:09:20 np0005548915 systemd[1]: var-lib-containers-storage-overlay-57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642-merged.mount: Deactivated successfully.
Dec  6 05:09:20 np0005548915 podman[266455]: 2025-12-06 10:09:20.319594171 +0000 UTC m=+0.601512208 container remove 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  6 05:09:20 np0005548915 systemd[1]: libpod-conmon-692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25.scope: Deactivated successfully.
Dec  6 05:09:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:20 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:20 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:20 np0005548915 nova_compute[254819]: 2025-12-06 10:09:20.596 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec  6 05:09:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Dec  6 05:09:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Dec  6 05:09:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:20 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:21 np0005548915 podman[266584]: 2025-12-06 10:09:21.090079233 +0000 UTC m=+0.049061133 container create 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:09:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:09:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:09:21 np0005548915 systemd[1]: Started libpod-conmon-7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4.scope.
Dec  6 05:09:21 np0005548915 podman[266584]: 2025-12-06 10:09:21.068714102 +0000 UTC m=+0.027696042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:09:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:09:21 np0005548915 podman[266584]: 2025-12-06 10:09:21.194785111 +0000 UTC m=+0.153767071 container init 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 05:09:21 np0005548915 podman[266584]: 2025-12-06 10:09:21.20220486 +0000 UTC m=+0.161186770 container start 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:09:21 np0005548915 podman[266584]: 2025-12-06 10:09:21.206089703 +0000 UTC m=+0.165071653 container attach 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:09:21 np0005548915 distracted_banzai[266601]: 167 167
Dec  6 05:09:21 np0005548915 systemd[1]: libpod-7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4.scope: Deactivated successfully.
Dec  6 05:09:21 np0005548915 podman[266584]: 2025-12-06 10:09:21.210237764 +0000 UTC m=+0.169219664 container died 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:09:21 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2629bcebe6ab5ae7a4096184b891d164b52b8863b31a2259ea4b879e097226cc-merged.mount: Deactivated successfully.
Dec  6 05:09:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:21 np0005548915 podman[266584]: 2025-12-06 10:09:21.254783565 +0000 UTC m=+0.213765505 container remove 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 05:09:21 np0005548915 systemd[1]: libpod-conmon-7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4.scope: Deactivated successfully.
Dec  6 05:09:21 np0005548915 podman[266624]: 2025-12-06 10:09:21.483475806 +0000 UTC m=+0.064682179 container create 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:09:21 np0005548915 systemd[1]: Started libpod-conmon-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope.
Dec  6 05:09:21 np0005548915 podman[266624]: 2025-12-06 10:09:21.463138293 +0000 UTC m=+0.044344686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:09:21 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:09:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:21 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:21 np0005548915 podman[266624]: 2025-12-06 10:09:21.581350702 +0000 UTC m=+0.162557175 container init 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:09:21 np0005548915 podman[266624]: 2025-12-06 10:09:21.591926095 +0000 UTC m=+0.173132478 container start 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:09:21 np0005548915 podman[266624]: 2025-12-06 10:09:21.59587536 +0000 UTC m=+0.177081733 container attach 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:09:22 np0005548915 lvm[266716]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:09:22 np0005548915 lvm[266716]: VG ceph_vg0 finished
Dec  6 05:09:22 np0005548915 nova_compute[254819]: 2025-12-06 10:09:22.389 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:09:22 np0005548915 elegant_ptolemy[266641]: {}
Dec  6 05:09:22 np0005548915 systemd[1]: libpod-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope: Deactivated successfully.
Dec  6 05:09:22 np0005548915 systemd[1]: libpod-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope: Consumed 1.325s CPU time.
Dec  6 05:09:22 np0005548915 podman[266624]: 2025-12-06 10:09:22.431093643 +0000 UTC m=+1.012300046 container died 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:09:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:22 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:22 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3-merged.mount: Deactivated successfully.
Dec  6 05:09:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:22 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:22 np0005548915 podman[266624]: 2025-12-06 10:09:22.620599328 +0000 UTC m=+1.201805701 container remove 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 05:09:22 np0005548915 systemd[1]: libpod-conmon-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope: Deactivated successfully.
Dec  6 05:09:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:09:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:22 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:09:22 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec  6 05:09:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:22 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:23.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:23 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:23 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:09:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:09:23
Dec  6 05:09:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:09:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:09:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.meta', '.nfs', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control']
Dec  6 05:09:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:09:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:09:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:09:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:09:23 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:09:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011083224466041544 of space, bias 1.0, pg target 0.3324967339812463 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:09:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:24 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:09:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:24 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:24 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:24.802 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 05:09:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec  6 05:09:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:24 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:25.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:25 np0005548915 nova_compute[254819]: 2025-12-06 10:09:25.598 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:09:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:26 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:26 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec  6 05:09:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:26 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:27.286Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:09:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:27.286Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:27 np0005548915 nova_compute[254819]: 2025-12-06 10:09:27.393 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:09:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:28 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:28 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  6 05:09:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:28.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:09:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:28.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:09:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:28.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:09:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:28 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:29.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:30 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:30 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:30 np0005548915 nova_compute[254819]: 2025-12-06 10:09:30.600 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec  6 05:09:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Dec  6 05:09:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Dec  6 05:09:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:30 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14003a30 fd 38 proxy ignored for local
Dec  6 05:09:30 np0005548915 kernel: ganesha.nfsd[265892]: segfault at 50 ip 00007f3ed46ea32e sp 00007f3ea67fb210 error 4 in libntirpc.so.5.8[7f3ed46cf000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  6 05:09:30 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 05:09:31 np0005548915 systemd[1]: Started Process Core Dump (PID 266767/UID 0).
Dec  6 05:09:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:31.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:31.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:32 np0005548915 nova_compute[254819]: 2025-12-06 10:09:32.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec  6 05:09:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:33.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:33 np0005548915 podman[266797]: 2025-12-06 10:09:33.455503875 +0000 UTC m=+0.081096569 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  6 05:09:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:34 np0005548915 systemd-coredump[266768]: Process 265791 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 44:#012#0  0x00007f3ed46ea32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 05:09:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec  6 05:09:34 np0005548915 systemd[1]: systemd-coredump@10-266767-0.service: Deactivated successfully.
Dec  6 05:09:34 np0005548915 systemd[1]: systemd-coredump@10-266767-0.service: Consumed 1.292s CPU time.
Dec  6 05:09:34 np0005548915 podman[266823]: 2025-12-06 10:09:34.903554506 +0000 UTC m=+0.030575608 container died af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:09:34 np0005548915 systemd[1]: var-lib-containers-storage-overlay-042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee-merged.mount: Deactivated successfully.
Dec  6 05:09:35 np0005548915 podman[266823]: 2025-12-06 10:09:35.065626708 +0000 UTC m=+0.192647770 container remove af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 05:09:35 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 05:09:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  6 05:09:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:35.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  6 05:09:35 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 05:09:35 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.496s CPU time.
Dec  6 05:09:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:35 np0005548915 nova_compute[254819]: 2025-12-06 10:09:35.643 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:09:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:37.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:37.287Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:09:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:37.288Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:37 np0005548915 nova_compute[254819]: 2025-12-06 10:09:37.401 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:09:37 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 2905 syncs, 3.80 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1859 writes, 5432 keys, 1859 commit groups, 1.0 writes per commit group, ingest: 5.24 MB, 0.01 MB/s#012Interval WAL: 1859 writes, 801 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 05:09:38 np0005548915 podman[266870]: 2025-12-06 10:09:38.499715659 +0000 UTC m=+0.125453814 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 05:09:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  6 05:09:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:38.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:09:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:09:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100938 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:09:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:39.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:09:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:09:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.692 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.777 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:09:40 np0005548915 nova_compute[254819]: 2025-12-06 10:09:40.778 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:09:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:09:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec  6 05:09:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec  6 05:09:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:41 np0005548915 nova_compute[254819]: 2025-12-06 10:09:41.580 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.802s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:09:41 np0005548915 nova_compute[254819]: 2025-12-06 10:09:41.805 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:09:41 np0005548915 nova_compute[254819]: 2025-12-06 10:09:41.805 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.009 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.010 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4410MB free_disk=59.89716339111328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.011 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.011 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.090 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 112440c2-8dcc-4a19-9d83-5489df97079a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.091 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.091 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.130 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.404 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:09:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979909076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.635 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.642 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.677 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.682 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:09:42 np0005548915 nova_compute[254819]: 2025-12-06 10:09:42.683 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:09:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:09:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:43.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:09:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:43 np0005548915 podman[266946]: 2025-12-06 10:09:43.454579614 +0000 UTC m=+0.087645923 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.684 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.684 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.685 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.685 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:09:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.958 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.959 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.959 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:09:44 np0005548915 nova_compute[254819]: 2025-12-06 10:09:44.960 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:09:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:09:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:09:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:45 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 11.
Dec  6 05:09:45 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:09:45 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.496s CPU time.
Dec  6 05:09:45 np0005548915 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.620 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.621 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.621 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.621 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.622 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.624 254824 INFO nova.compute.manager [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Terminating instance#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.625 254824 DEBUG nova.compute.manager [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:09:45 np0005548915 podman[267021]: 2025-12-06 10:09:45.680899575 +0000 UTC m=+0.071234224 container create c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  6 05:09:45 np0005548915 kernel: tap2d0118f7-94 (unregistering): left promiscuous mode
Dec  6 05:09:45 np0005548915 NetworkManager[48882]: <info>  [1765015785.6914] device (tap2d0118f7-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:09:45 np0005548915 ovn_controller[152417]: 2025-12-06T10:09:45Z|00062|binding|INFO|Releasing lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae from this chassis (sb_readonly=0)
Dec  6 05:09:45 np0005548915 ovn_controller[152417]: 2025-12-06T10:09:45Z|00063|binding|INFO|Setting lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae down in Southbound
Dec  6 05:09:45 np0005548915 ovn_controller[152417]: 2025-12-06T10:09:45Z|00064|binding|INFO|Removing iface tap2d0118f7-94 ovn-installed in OVS
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.736 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.739 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:45 np0005548915 podman[267021]: 2025-12-06 10:09:45.657131251 +0000 UTC m=+0.047465980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:09:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.750 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:37:0e 10.100.0.5'], port_security=['fa:16:3e:b4:37:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '112440c2-8dcc-4a19-9d83-5489df97079a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3027a471-10b5-4a61-b09a-0f0e6072fde1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=611cd505-2a02-4d45-a906-bd97d1447953, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:09:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.751 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae in datapath dccd9941-4f3e-4086-b9cd-651d8e99e8ec unbound from our chassis#033[00m
Dec  6 05:09:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.753 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dccd9941-4f3e-4086-b9cd-651d8e99e8ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:09:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.754 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fee4a75d-615e-465b-ab9b-aebdfe48c8d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:45 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.755 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec namespace which is not needed anymore#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.766 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:09:45 np0005548915 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  6 05:09:45 np0005548915 podman[267021]: 2025-12-06 10:09:45.802245688 +0000 UTC m=+0.192580387 container init c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  6 05:09:45 np0005548915 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 15.777s CPU time.
Dec  6 05:09:45 np0005548915 systemd-machined[216202]: Machine qemu-3-instance-00000004 terminated.
Dec  6 05:09:45 np0005548915 podman[267021]: 2025-12-06 10:09:45.809884253 +0000 UTC m=+0.200218912 container start c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:09:45 np0005548915 bash[267021]: c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  6 05:09:45 np0005548915 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.850 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.856 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.866 254824 INFO nova.virt.libvirt.driver [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance destroyed successfully.#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.867 254824 DEBUG nova.objects.instance [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.880 254824 DEBUG nova.virt.libvirt.vif [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-609462386',display_name='tempest-TestNetworkBasicOps-server-609462386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-609462386',id=4,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB71wqy4Vx0ThrIuit7bIMfXK6YLKUBZN1lhipBZkl9t8qtDE6kg/NsSamOzTH/a+zjpG46+Awuo3QHJ780QH0C6lo/2uOHg18NVMuqh+pfDOXzTKYCxhRCIxLSg0ck4w==',key_name='tempest-TestNetworkBasicOps-1991615071',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:08:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-ykqs2wqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:08:41Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=112440c2-8dcc-4a19-9d83-5489df97079a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.880 254824 DEBUG nova.network.os_vif_util [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.881 254824 DEBUG nova.network.os_vif_util [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.882 254824 DEBUG os_vif [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.884 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.885 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d0118f7-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.887 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.890 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  6 05:09:45 np0005548915 nova_compute[254819]: 2025-12-06 10:09:45.897 254824 INFO os_vif [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94')#033[00m
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  6 05:09:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:09:45 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : haproxy version is 2.8.14-c23fe91
Dec  6 05:09:45 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : path to executable is /usr/sbin/haproxy
Dec  6 05:09:45 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [WARNING]  (265656) : Exiting Master process...
Dec  6 05:09:45 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [WARNING]  (265656) : Exiting Master process...
Dec  6 05:09:45 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [ALERT]    (265656) : Current worker (265658) exited with code 143 (Terminated)
Dec  6 05:09:45 np0005548915 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [WARNING]  (265656) : All workers exited. Exiting... (0)
Dec  6 05:09:45 np0005548915 systemd[1]: libpod-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9.scope: Deactivated successfully.
Dec  6 05:09:45 np0005548915 podman[267090]: 2025-12-06 10:09:45.968823741 +0000 UTC m=+0.055850414 container died b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  6 05:09:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9-userdata-shm.mount: Deactivated successfully.
Dec  6 05:09:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cb19a4954b41a05ededd94f9209e0e9572500e71415f9c5c428921ac41b73efd-merged.mount: Deactivated successfully.
Dec  6 05:09:46 np0005548915 podman[267090]: 2025-12-06 10:09:46.020979524 +0000 UTC m=+0.108006197 container cleanup b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:09:46 np0005548915 systemd[1]: libpod-conmon-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9.scope: Deactivated successfully.
Dec  6 05:09:46 np0005548915 podman[267161]: 2025-12-06 10:09:46.087746068 +0000 UTC m=+0.044496550 container remove b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.106 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b6462c-3d40-4af0-9ec2-c5dcb6a12ada]: (4, ('Sat Dec  6 10:09:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec (b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9)\nb2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9\nSat Dec  6 10:09:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec (b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9)\nb2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.109 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3f682c74-65bb-4317-b8e8-dea8c4ef13b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.111 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdccd9941-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.114 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:46 np0005548915 kernel: tapdccd9941-40: left promiscuous mode
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.127 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.132 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2f97218f-6b79-4539-84ad-9661af72f9fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.155 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[141c129e-5d91-4a08-975a-3246fe731e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.157 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa083e4-d77d-4d59-96fe-04cea738c0e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.188 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[78d7f307-d689-4a41-b71c-c053d10a1a99]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409069, 'reachable_time': 44376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267177, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.191 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:09:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.191 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[2c85a3d2-3c5a-42b5-b9ee-d2ddeb756e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:09:46 np0005548915 systemd[1]: run-netns-ovnmeta\x2ddccd9941\x2d4f3e\x2d4086\x2db9cd\x2d651d8e99e8ec.mount: Deactivated successfully.
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.340 254824 INFO nova.virt.libvirt.driver [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deleting instance files /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a_del#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.341 254824 INFO nova.virt.libvirt.driver [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deletion of /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a_del complete#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.392 254824 DEBUG nova.compute.manager [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-unplugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.393 254824 DEBUG oslo_concurrency.lockutils [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.393 254824 DEBUG oslo_concurrency.lockutils [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.394 254824 DEBUG oslo_concurrency.lockutils [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.394 254824 DEBUG nova.compute.manager [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] No waiting events found dispatching network-vif-unplugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.395 254824 DEBUG nova.compute.manager [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-unplugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.422 254824 INFO nova.compute.manager [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.423 254824 DEBUG oslo.service.loopingcall [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.424 254824 DEBUG nova.compute.manager [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.425 254824 DEBUG nova.network.neutron [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.470 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.493 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.494 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.494 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:46 np0005548915 nova_compute[254819]: 2025-12-06 10:09:46.494 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:09:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec  6 05:09:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:47.288Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:09:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:47.288Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:09:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:47.289Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:47 np0005548915 nova_compute[254819]: 2025-12-06 10:09:47.303 254824 DEBUG nova.network.neutron [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:09:47 np0005548915 nova_compute[254819]: 2025-12-06 10:09:47.323 254824 INFO nova.compute.manager [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 0.90 seconds to deallocate network for instance.#033[00m
Dec  6 05:09:47 np0005548915 nova_compute[254819]: 2025-12-06 10:09:47.365 254824 DEBUG nova.compute.manager [req-5c35bc26-66a7-4a41-9e24-dca0e7864753 req-414fb5e5-1b72-4d5b-836f-a936427cdaf3 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-deleted-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:09:47 np0005548915 nova_compute[254819]: 2025-12-06 10:09:47.368 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:47 np0005548915 nova_compute[254819]: 2025-12-06 10:09:47.368 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:47 np0005548915 nova_compute[254819]: 2025-12-06 10:09:47.416 254824 DEBUG oslo_concurrency.processutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:48.480 254824 DEBUG nova.compute.manager [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:48.482 254824 DEBUG oslo_concurrency.lockutils [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:48.483 254824 DEBUG oslo_concurrency.lockutils [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:48.483 254824 DEBUG oslo_concurrency.lockutils [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:48.484 254824 DEBUG nova.compute.manager [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] No waiting events found dispatching network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:48.485 254824 WARNING nova.compute.manager [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received unexpected event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:48.487 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:09:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 48 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 53 op/s
Dec  6 05:09:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:49.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:09:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279661780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:49.046 254824 DEBUG oslo_concurrency.processutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:09:49 np0005548915 nova_compute[254819]: 2025-12-06 10:09:49.055 254824 DEBUG nova.compute.provider_tree [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:09:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:49.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:49.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:50 np0005548915 nova_compute[254819]: 2025-12-06 10:09:50.743 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec  6 05:09:50 np0005548915 nova_compute[254819]: 2025-12-06 10:09:50.887 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec  6 05:09:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec  6 05:09:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:51.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:51 np0005548915 nova_compute[254819]: 2025-12-06 10:09:51.138 254824 DEBUG nova.scheduler.client.report [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:09:51 np0005548915 nova_compute[254819]: 2025-12-06 10:09:51.177 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:51 np0005548915 nova_compute[254819]: 2025-12-06 10:09:51.248 254824 INFO nova.scheduler.client.report [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 112440c2-8dcc-4a19-9d83-5489df97079a#033[00m
Dec  6 05:09:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:51.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:51 np0005548915 nova_compute[254819]: 2025-12-06 10:09:51.309 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:09:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:09:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec  6 05:09:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:09:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:53.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:09:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:09:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:09:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:09:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:09:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:09:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:09:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:09:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:09:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:09:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:09:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:09:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:09:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:09:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.2 KiB/s wr, 58 op/s
Dec  6 05:09:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:09:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:55.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:09:55 np0005548915 nova_compute[254819]: 2025-12-06 10:09:55.744 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:55 np0005548915 nova_compute[254819]: 2025-12-06 10:09:55.889 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:56 np0005548915 nova_compute[254819]: 2025-12-06 10:09:56.292 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:56 np0005548915 nova_compute[254819]: 2025-12-06 10:09:56.405 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:09:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Dec  6 05:09:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:57.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:57.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:57.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4000fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Dec  6 05:09:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:09:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:59.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:09:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:09:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:09:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:09:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Dec  6 05:10:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec  6 05:10:00 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0 is in unknown state
Dec  6 05:10:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:00 np0005548915 nova_compute[254819]: 2025-12-06 10:10:00.748 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Dec  6 05:10:00 np0005548915 nova_compute[254819]: 2025-12-06 10:10:00.864 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015785.8629587, 112440c2-8dcc-4a19-9d83-5489df97079a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:10:00 np0005548915 nova_compute[254819]: 2025-12-06 10:10:00.864 254824 INFO nova.compute.manager [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:10:00 np0005548915 nova_compute[254819]: 2025-12-06 10:10:00.887 254824 DEBUG nova.compute.manager [None req-0208a515-bbfb-4354-9b62-fa978d41f879 - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:10:00 np0005548915 nova_compute[254819]: 2025-12-06 10:10:00.891 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:10:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:10:00 np0005548915 ceph-mon[74327]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Dec  6 05:10:00 np0005548915 ceph-mon[74327]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec  6 05:10:00 np0005548915 ceph-mon[74327]:    daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0 is in unknown state
Dec  6 05:10:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101001 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:10:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:01.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:10:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:03.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:04 np0005548915 podman[267261]: 2025-12-06 10:10:04.495685982 +0000 UTC m=+0.109299553 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:10:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  6 05:10:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:05.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:05.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:05 np0005548915 nova_compute[254819]: 2025-12-06 10:10:05.749 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:05 np0005548915 nova_compute[254819]: 2025-12-06 10:10:05.892 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:10:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:07.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:07.291Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:07.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:10:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:10:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:10:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:09.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:09.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:09.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:09 np0005548915 podman[267287]: 2025-12-06 10:10:09.502378673 +0000 UTC m=+0.130604411 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 05:10:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:10 np0005548915 nova_compute[254819]: 2025-12-06 10:10:10.750 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:10:10 np0005548915 nova_compute[254819]: 2025-12-06 10:10:10.894 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:10:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:10:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:11.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:10:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:13.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:13 np0005548915 nova_compute[254819]: 2025-12-06 10:10:13.728 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:13 np0005548915 nova_compute[254819]: 2025-12-06 10:10:13.729 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:13 np0005548915 nova_compute[254819]: 2025-12-06 10:10:13.750 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:10:13 np0005548915 nova_compute[254819]: 2025-12-06 10:10:13.837 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:13 np0005548915 nova_compute[254819]: 2025-12-06 10:10:13.838 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:13 np0005548915 nova_compute[254819]: 2025-12-06 10:10:13.849 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:10:13 np0005548915 nova_compute[254819]: 2025-12-06 10:10:13.850 254824 INFO nova.compute.claims [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.000 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:14 np0005548915 podman[267363]: 2025-12-06 10:10:14.458074541 +0000 UTC m=+0.084056388 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 05:10:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:10:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879393744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.547 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.556 254824 DEBUG nova.compute.provider_tree [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.595 254824 DEBUG nova.scheduler.client.report [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:10:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.628 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.630 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.681 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.682 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.702 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.719 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:10:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.811 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.813 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.814 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Creating image(s)#033[00m
Dec  6 05:10:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.852 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.881 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.913 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.918 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:14 np0005548915 nova_compute[254819]: 2025-12-06 10:10:14.948 254824 DEBUG nova.policy [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:10:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.008 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.009 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.010 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.010 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.038 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.045 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 467f8e9a-e166-409e-920c-689fea4ea3f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.352 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 467f8e9a-e166-409e-920c-689fea4ea3f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.437 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.569 254824 DEBUG nova.objects.instance [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.586 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.587 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Ensure instance console log exists: /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.588 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.589 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.590 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.601 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully created port: ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.765 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:15 np0005548915 nova_compute[254819]: 2025-12-06 10:10:15.896 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:16 np0005548915 nova_compute[254819]: 2025-12-06 10:10:16.671 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully updated port: ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:10:16 np0005548915 nova_compute[254819]: 2025-12-06 10:10:16.689 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:10:16 np0005548915 nova_compute[254819]: 2025-12-06 10:10:16.690 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:10:16 np0005548915 nova_compute[254819]: 2025-12-06 10:10:16.690 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:10:16 np0005548915 nova_compute[254819]: 2025-12-06 10:10:16.761 254824 DEBUG nova.compute.manager [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:10:16 np0005548915 nova_compute[254819]: 2025-12-06 10:10:16.762 254824 DEBUG nova.compute.manager [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:10:16 np0005548915 nova_compute[254819]: 2025-12-06 10:10:16.762 254824 DEBUG oslo_concurrency.lockutils [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:10:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:10:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.146 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:10:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:17.292Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:10:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:17.293Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:17.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.796 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.820 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.821 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance network_info: |[{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.823 254824 DEBUG oslo_concurrency.lockutils [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.824 254824 DEBUG nova.network.neutron [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.830 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start _get_guest_xml network_info=[{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.837 254824 WARNING nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.847 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.848 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.852 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.853 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.853 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.854 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.855 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.856 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.856 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.857 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.857 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.858 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.858 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.859 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.859 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.859 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:10:17 np0005548915 nova_compute[254819]: 2025-12-06 10:10:17.865 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:10:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273770741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.325 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.398 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.403 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:10:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944879998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:10:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.845 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.847 254824 DEBUG nova.virt.libvirt.vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:10:14Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.847 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.848 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.849 254824 DEBUG nova.objects.instance [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.866 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <name>instance-00000006</name>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:10:17</nova:creationTime>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <entry name="serial">467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <entry name="uuid">467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:64:9d:d4"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <target dev="tapec2bc9a6-15"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log" append="off"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:10:18 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:10:18 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:10:18 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:10:18 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.868 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Preparing to wait for external event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.868 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.869 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.869 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.870 254824 DEBUG nova.virt.libvirt.vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:10:14Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.870 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.871 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.872 254824 DEBUG os_vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.873 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.873 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.874 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.878 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.878 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec2bc9a6-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.879 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec2bc9a6-15, col_values=(('external_ids', {'iface-id': 'ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:9d:d4', 'vm-uuid': '467f8e9a-e166-409e-920c-689fea4ea3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.880 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:18 np0005548915 NetworkManager[48882]: <info>  [1765015818.8814] manager: (tapec2bc9a6-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.882 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.891 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.892 254824 INFO os_vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15')#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.917 254824 DEBUG nova.network.neutron [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.918 254824 DEBUG nova.network.neutron [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.952 254824 DEBUG oslo_concurrency.lockutils [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.975 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.975 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.976 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:64:9d:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:10:18 np0005548915 nova_compute[254819]: 2025-12-06 10:10:18.977 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Using config drive#033[00m
Dec  6 05:10:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:19.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:10:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:19.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:10:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.019 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:10:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.396 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Creating config drive at /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config#033[00m
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.404 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_jpp5rc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.550 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_jpp5rc" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.601 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.607 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.783 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.784 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deleting local config drive /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config because it was imported into RBD.#033[00m
Dec  6 05:10:19 np0005548915 kernel: tapec2bc9a6-15: entered promiscuous mode
Dec  6 05:10:19 np0005548915 NetworkManager[48882]: <info>  [1765015819.8679] manager: (tapec2bc9a6-15): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Dec  6 05:10:19 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:19Z|00065|binding|INFO|Claiming lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for this chassis.
Dec  6 05:10:19 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:19Z|00066|binding|INFO|ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b: Claiming fa:16:3e:64:9d:d4 10.100.0.14
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.872 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.877 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.888 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.890 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd bound to our chassis#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.892 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.910 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[09b5113e-bfbf-435f-ad16-0d3391f6265b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.912 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d76af3c-e1 in ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.915 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d76af3c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.915 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5a26638b-4c5c-488b-b7b4-c3fbf67bf72f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.916 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[99280c6c-76e3-4ab0-a872-222d858fc90d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:19 np0005548915 systemd-udevd[267690]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:10:19 np0005548915 systemd-machined[216202]: New machine qemu-4-instance-00000006.
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.933 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[3abf1e0f-ca8e-4f09-9bff-e48b8acba54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:19 np0005548915 NetworkManager[48882]: <info>  [1765015819.9373] device (tapec2bc9a6-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:10:19 np0005548915 NetworkManager[48882]: <info>  [1765015819.9386] device (tapec2bc9a6-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.941 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:19 np0005548915 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.948 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:19 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:19Z|00067|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b ovn-installed in OVS
Dec  6 05:10:19 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:19Z|00068|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b up in Southbound
Dec  6 05:10:19 np0005548915 nova_compute[254819]: 2025-12-06 10:10:19.951 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.955 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bd727bd4-1aac-46f7-9f04-fe6203924d53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.999 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[67ede21b-c457-4beb-8ed4-95d0d576a32e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.006 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[986af9f7-dd31-4b3f-b740-0a53e66b2cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 systemd-udevd[267693]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:10:20 np0005548915 NetworkManager[48882]: <info>  [1765015820.0070] manager: (tap4d76af3c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.048 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[27a86b5e-d1b9-4d1e-94ea-eccbe5fdf1d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.052 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0e404d-2a7b-4d5a-878f-f146463fdafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 NetworkManager[48882]: <info>  [1765015820.0895] device (tap4d76af3c-e0): carrier: link connected
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.100 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8ed00a-2570-42f3-8fd0-230f5d398141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.126 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[67b9480f-fbeb-46e0-b425-1874484a3ac5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d76af3c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419057, 'reachable_time': 23811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267722, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.145 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cf59e66a-ffa1-495b-9d6e-99ff633c2a64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419057, 'tstamp': 419057}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267738, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.164 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e6927880-4b98-4d34-ab0d-58e7f279b1fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d76af3c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419057, 'reachable_time': 23811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267742, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.200 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9b799b68-51cb-4059-a312-5e2ceb34b198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.278 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6bead5ac-1116-41ff-8f4e-b29ae413be6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.280 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d76af3c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.280 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.280 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d76af3c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:20 np0005548915 NetworkManager[48882]: <info>  [1765015820.2839] manager: (tap4d76af3c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  6 05:10:20 np0005548915 kernel: tap4d76af3c-e0: entered promiscuous mode
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.285 254824 DEBUG nova.compute.manager [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.286 254824 DEBUG oslo_concurrency.lockutils [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.286 254824 DEBUG oslo_concurrency.lockutils [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.286 254824 DEBUG oslo_concurrency.lockutils [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.287 254824 DEBUG nova.compute.manager [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Processing event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.287 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.287 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d76af3c-e0, col_values=(('external_ids', {'iface-id': '9f6682d5-4069-4017-8320-2e242e2a8f66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:20 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:20Z|00069|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.289 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.290 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41749b-2941-4c31-a3a3-6b1b35ae7d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.291 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-4d76af3c-ede9-445b-bea0-ba96a2eaeddd
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.pid.haproxy
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID 4d76af3c-ede9-445b-bea0-ba96a2eaeddd
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.291 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'env', 'PROCESS_TAG=haproxy-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.303 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.323 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015820.3223214, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.323 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Started (Lifecycle Event)
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.325 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.330 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.334 254824 INFO nova.virt.libvirt.driver [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance spawned successfully.
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.335 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.349 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.357 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.361 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.361 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.362 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.362 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.363 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.363 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.390 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.391 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015820.322769, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.391 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Paused (Lifecycle Event)
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.402 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.402 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.421 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.425 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015820.32821, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.425 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Resumed (Lifecycle Event)
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.429 254824 INFO nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 5.62 seconds to spawn the instance on the hypervisor.
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.430 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.464 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.467 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 05:10:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.501 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.511 254824 INFO nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 6.72 seconds to build instance.
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.530 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:10:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:20 np0005548915 podman[267798]: 2025-12-06 10:10:20.690136211 +0000 UTC m=+0.055131805 container create 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 05:10:20 np0005548915 systemd[1]: Started libpod-conmon-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c.scope.
Dec  6 05:10:20 np0005548915 podman[267798]: 2025-12-06 10:10:20.660461178 +0000 UTC m=+0.025456802 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:10:20 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:20 np0005548915 nova_compute[254819]: 2025-12-06 10:10:20.814 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:10:20 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c35a0a906865a4663842e5ed6b698da4d1040e57a2b60288990c137c9d3376/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:10:20 np0005548915 podman[267798]: 2025-12-06 10:10:20.840577602 +0000 UTC m=+0.205573206 container init 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:10:20 np0005548915 podman[267798]: 2025-12-06 10:10:20.852345046 +0000 UTC m=+0.217340650 container start 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 05:10:20 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : New worker (267817) forked
Dec  6 05:10:20 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : Loading success.
Dec  6 05:10:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:10:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:10:20 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.938 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  6 05:10:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:22 np0005548915 nova_compute[254819]: 2025-12-06 10:10:22.479 254824 DEBUG nova.compute.manager [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 05:10:22 np0005548915 nova_compute[254819]: 2025-12-06 10:10:22.480 254824 DEBUG oslo_concurrency.lockutils [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:10:22 np0005548915 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 DEBUG oslo_concurrency.lockutils [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:10:22 np0005548915 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 DEBUG oslo_concurrency.lockutils [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:10:22 np0005548915 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 DEBUG nova.compute.manager [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 05:10:22 np0005548915 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 WARNING nova.compute.manager [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state active and task_state None.
Dec  6 05:10:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:10:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:23.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:23 np0005548915 nova_compute[254819]: 2025-12-06 10:10:23.881 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:10:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:10:23
Dec  6 05:10:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:10:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:10:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.control']
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:10:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:10:24 np0005548915 NetworkManager[48882]: <info>  [1765015824.3456] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec  6 05:10:24 np0005548915 NetworkManager[48882]: <info>  [1765015824.3471] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec  6 05:10:24 np0005548915 nova_compute[254819]: 2025-12-06 10:10:24.344 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:10:24 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:24Z|00070|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec  6 05:10:24 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:24Z|00071|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:10:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:10:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:24 np0005548915 podman[268003]: 2025-12-06 10:10:24.725013307 +0000 UTC m=+0.062631444 container create 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:10:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:24 np0005548915 systemd[1]: Started libpod-conmon-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope.
Dec  6 05:10:24 np0005548915 podman[268003]: 2025-12-06 10:10:24.696367502 +0000 UTC m=+0.033985659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:24 np0005548915 nova_compute[254819]: 2025-12-06 10:10:24.808 254824 DEBUG nova.compute.manager [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 05:10:24 np0005548915 nova_compute[254819]: 2025-12-06 10:10:24.808 254824 DEBUG nova.compute.manager [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 05:10:24 np0005548915 nova_compute[254819]: 2025-12-06 10:10:24.809 254824 DEBUG oslo_concurrency.lockutils [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 05:10:24 np0005548915 nova_compute[254819]: 2025-12-06 10:10:24.809 254824 DEBUG oslo_concurrency.lockutils [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 05:10:24 np0005548915 nova_compute[254819]: 2025-12-06 10:10:24.809 254824 DEBUG nova.network.neutron [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 05:10:24 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  6 05:10:24 np0005548915 podman[268003]: 2025-12-06 10:10:24.850038509 +0000 UTC m=+0.187656656 container init 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 05:10:24 np0005548915 podman[268003]: 2025-12-06 10:10:24.864857275 +0000 UTC m=+0.202475412 container start 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:10:24 np0005548915 podman[268003]: 2025-12-06 10:10:24.868469972 +0000 UTC m=+0.206088099 container attach 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  6 05:10:24 np0005548915 condescending_rhodes[268020]: 167 167
Dec  6 05:10:24 np0005548915 systemd[1]: libpod-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope: Deactivated successfully.
Dec  6 05:10:24 np0005548915 conmon[268020]: conmon 6fe5c235c7eef7743df3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope/container/memory.events
Dec  6 05:10:24 np0005548915 podman[268025]: 2025-12-06 10:10:24.934003453 +0000 UTC m=+0.038446819 container died 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 05:10:24 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d9115c5e0e2ed4420303d811e5ef9dcd85e1bd3007e7c9b9528c62aa6af1814d-merged.mount: Deactivated successfully.
Dec  6 05:10:24 np0005548915 podman[268025]: 2025-12-06 10:10:24.977171686 +0000 UTC m=+0.081615042 container remove 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:10:24 np0005548915 systemd[1]: libpod-conmon-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope: Deactivated successfully.
Dec  6 05:10:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:25.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:25 np0005548915 podman[268047]: 2025-12-06 10:10:25.203550727 +0000 UTC m=+0.066297032 container create 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:10:25 np0005548915 systemd[1]: Started libpod-conmon-2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce.scope.
Dec  6 05:10:25 np0005548915 podman[268047]: 2025-12-06 10:10:25.165990733 +0000 UTC m=+0.028737048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:25 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:25 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:25 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:25 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:25 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:25 np0005548915 podman[268047]: 2025-12-06 10:10:25.333754687 +0000 UTC m=+0.196501032 container init 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:10:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:25 np0005548915 podman[268047]: 2025-12-06 10:10:25.342613054 +0000 UTC m=+0.205359339 container start 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:10:25 np0005548915 podman[268047]: 2025-12-06 10:10:25.347238748 +0000 UTC m=+0.209985093 container attach 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:10:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 05:10:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 05:10:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:25 np0005548915 nova_compute[254819]: 2025-12-06 10:10:25.860 254824 DEBUG nova.network.neutron [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:10:25 np0005548915 nova_compute[254819]: 2025-12-06 10:10:25.862 254824 DEBUG nova.network.neutron [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:10:25 np0005548915 nova_compute[254819]: 2025-12-06 10:10:25.878 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:25 np0005548915 nova_compute[254819]: 2025-12-06 10:10:25.885 254824 DEBUG oslo_concurrency.lockutils [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:10:25 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:25 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]: [
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:    {
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "available": false,
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "being_replaced": false,
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "ceph_device_lvm": false,
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "lsm_data": {},
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "lvs": [],
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "path": "/dev/sr0",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "rejected_reasons": [
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "Has a FileSystem",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "Insufficient space (<5GB)"
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        ],
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        "sys_api": {
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "actuators": null,
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "device_nodes": [
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:                "sr0"
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            ],
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "devname": "sr0",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "human_readable_size": "482.00 KB",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "id_bus": "ata",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "model": "QEMU DVD-ROM",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "nr_requests": "2",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "parent": "/dev/sr0",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "partitions": {},
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "path": "/dev/sr0",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "removable": "1",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "rev": "2.5+",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "ro": "0",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "rotational": "1",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "sas_address": "",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "sas_device_handle": "",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "scheduler_mode": "mq-deadline",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "sectors": 0,
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "sectorsize": "2048",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "size": 493568.0,
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "support_discard": "2048",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "type": "disk",
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:            "vendor": "QEMU"
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:        }
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]:    }
Dec  6 05:10:26 np0005548915 relaxed_colden[268063]: ]
Dec  6 05:10:26 np0005548915 systemd[1]: libpod-2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce.scope: Deactivated successfully.
Dec  6 05:10:26 np0005548915 podman[268047]: 2025-12-06 10:10:26.05377193 +0000 UTC m=+0.916518225 container died 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:10:26 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4-merged.mount: Deactivated successfully.
Dec  6 05:10:26 np0005548915 podman[268047]: 2025-12-06 10:10:26.096725738 +0000 UTC m=+0.959472033 container remove 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 05:10:26 np0005548915 systemd[1]: libpod-conmon-2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce.scope: Deactivated successfully.
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:10:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:10:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:10:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:26 np0005548915 podman[269403]: 2025-12-06 10:10:26.815026875 +0000 UTC m=+0.041576311 container create 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 05:10:26 np0005548915 systemd[1]: Started libpod-conmon-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope.
Dec  6 05:10:26 np0005548915 podman[269403]: 2025-12-06 10:10:26.797385774 +0000 UTC m=+0.023935230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:26 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:26 np0005548915 podman[269403]: 2025-12-06 10:10:26.918038138 +0000 UTC m=+0.144587584 container init 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 05:10:26 np0005548915 podman[269403]: 2025-12-06 10:10:26.931241372 +0000 UTC m=+0.157790808 container start 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 05:10:26 np0005548915 systemd[1]: libpod-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope: Deactivated successfully.
Dec  6 05:10:26 np0005548915 heuristic_mclean[269419]: 167 167
Dec  6 05:10:26 np0005548915 conmon[269419]: conmon 90dce362cad02980bb2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope/container/memory.events
Dec  6 05:10:26 np0005548915 podman[269403]: 2025-12-06 10:10:26.937309974 +0000 UTC m=+0.163859460 container attach 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:10:26 np0005548915 podman[269403]: 2025-12-06 10:10:26.944503196 +0000 UTC m=+0.171052632 container died 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 05:10:26 np0005548915 systemd[1]: var-lib-containers-storage-overlay-69e147027509bd18db6962f8365114ebde36d795824121985e7edebdc0a9ca20-merged.mount: Deactivated successfully.
Dec  6 05:10:26 np0005548915 podman[269403]: 2025-12-06 10:10:26.990104145 +0000 UTC m=+0.216653591 container remove 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 05:10:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:27 np0005548915 systemd[1]: libpod-conmon-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope: Deactivated successfully.
Dec  6 05:10:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:27 np0005548915 podman[269444]: 2025-12-06 10:10:27.193012217 +0000 UTC m=+0.045775034 container create e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  6 05:10:27 np0005548915 systemd[1]: Started libpod-conmon-e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce.scope.
Dec  6 05:10:27 np0005548915 podman[269444]: 2025-12-06 10:10:27.172928121 +0000 UTC m=+0.025690958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:27 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:27 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:27 np0005548915 podman[269444]: 2025-12-06 10:10:27.293150504 +0000 UTC m=+0.145913321 container init e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 05:10:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:27.295Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:27 np0005548915 podman[269444]: 2025-12-06 10:10:27.304699082 +0000 UTC m=+0.157461909 container start e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:10:27 np0005548915 podman[269444]: 2025-12-06 10:10:27.309193743 +0000 UTC m=+0.161956560 container attach e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 05:10:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:27.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:27 np0005548915 cool_williams[269462]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:10:27 np0005548915 cool_williams[269462]: --> All data devices are unavailable
Dec  6 05:10:27 np0005548915 systemd[1]: libpod-e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce.scope: Deactivated successfully.
Dec  6 05:10:27 np0005548915 podman[269444]: 2025-12-06 10:10:27.685572902 +0000 UTC m=+0.538335759 container died e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 05:10:27 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f-merged.mount: Deactivated successfully.
Dec  6 05:10:27 np0005548915 podman[269444]: 2025-12-06 10:10:27.751674929 +0000 UTC m=+0.604437746 container remove e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:10:27 np0005548915 systemd[1]: libpod-conmon-e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce.scope: Deactivated successfully.
Dec  6 05:10:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:10:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:27 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:10:27 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:27.942 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 107 op/s
Dec  6 05:10:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:28 np0005548915 podman[269586]: 2025-12-06 10:10:28.537767487 +0000 UTC m=+0.048168018 container create c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:10:28 np0005548915 systemd[1]: Started libpod-conmon-c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16.scope.
Dec  6 05:10:28 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:28 np0005548915 podman[269586]: 2025-12-06 10:10:28.51613746 +0000 UTC m=+0.026538051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:28 np0005548915 podman[269586]: 2025-12-06 10:10:28.622691948 +0000 UTC m=+0.133092549 container init c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:10:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:28 np0005548915 podman[269586]: 2025-12-06 10:10:28.628841962 +0000 UTC m=+0.139242503 container start c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:10:28 np0005548915 stupefied_jang[269602]: 167 167
Dec  6 05:10:28 np0005548915 systemd[1]: libpod-c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16.scope: Deactivated successfully.
Dec  6 05:10:28 np0005548915 podman[269586]: 2025-12-06 10:10:28.634056601 +0000 UTC m=+0.144457152 container attach c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 05:10:28 np0005548915 podman[269586]: 2025-12-06 10:10:28.634778711 +0000 UTC m=+0.145179242 container died c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:10:28 np0005548915 systemd[1]: var-lib-containers-storage-overlay-574308e2ecc55184b698d3022324edb2188a4980fe2ce005678cffa7700c15fc-merged.mount: Deactivated successfully.
Dec  6 05:10:28 np0005548915 podman[269586]: 2025-12-06 10:10:28.673546887 +0000 UTC m=+0.183947448 container remove c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:10:28 np0005548915 systemd[1]: libpod-conmon-c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16.scope: Deactivated successfully.
Dec  6 05:10:28 np0005548915 nova_compute[254819]: 2025-12-06 10:10:28.884 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:28 np0005548915 podman[269627]: 2025-12-06 10:10:28.905122796 +0000 UTC m=+0.055384191 container create a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:10:28 np0005548915 systemd[1]: Started libpod-conmon-a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1.scope.
Dec  6 05:10:28 np0005548915 podman[269627]: 2025-12-06 10:10:28.878157515 +0000 UTC m=+0.028418950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:28 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:28 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:29 np0005548915 podman[269627]: 2025-12-06 10:10:29.004177033 +0000 UTC m=+0.154438468 container init a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Dec  6 05:10:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:29.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:29 np0005548915 podman[269627]: 2025-12-06 10:10:29.017006507 +0000 UTC m=+0.167267892 container start a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 05:10:29 np0005548915 podman[269627]: 2025-12-06 10:10:29.021617339 +0000 UTC m=+0.171878804 container attach a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec  6 05:10:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:29.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:29 np0005548915 jolly_benz[269643]: {
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:    "1": [
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:        {
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "devices": [
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "/dev/loop3"
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            ],
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "lv_name": "ceph_lv0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "lv_size": "21470642176",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "name": "ceph_lv0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "tags": {
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.cluster_name": "ceph",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.crush_device_class": "",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.encrypted": "0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.osd_id": "1",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.type": "block",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.vdo": "0",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:                "ceph.with_tpm": "0"
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            },
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "type": "block",
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:            "vg_name": "ceph_vg0"
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:        }
Dec  6 05:10:29 np0005548915 jolly_benz[269643]:    ]
Dec  6 05:10:29 np0005548915 jolly_benz[269643]: }
Dec  6 05:10:29 np0005548915 systemd[1]: libpod-a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1.scope: Deactivated successfully.
Dec  6 05:10:29 np0005548915 podman[269627]: 2025-12-06 10:10:29.340881142 +0000 UTC m=+0.491142547 container died a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 05:10:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:29 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0-merged.mount: Deactivated successfully.
Dec  6 05:10:29 np0005548915 podman[269627]: 2025-12-06 10:10:29.392931103 +0000 UTC m=+0.543192528 container remove a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 05:10:29 np0005548915 systemd[1]: libpod-conmon-a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1.scope: Deactivated successfully.
Dec  6 05:10:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:30 np0005548915 podman[269757]: 2025-12-06 10:10:30.023217578 +0000 UTC m=+0.044901561 container create f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:10:30 np0005548915 systemd[1]: Started libpod-conmon-f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec.scope.
Dec  6 05:10:30 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:30 np0005548915 podman[269757]: 2025-12-06 10:10:30.007265232 +0000 UTC m=+0.028949244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:30 np0005548915 podman[269757]: 2025-12-06 10:10:30.106495893 +0000 UTC m=+0.128179885 container init f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:10:30 np0005548915 podman[269757]: 2025-12-06 10:10:30.112300369 +0000 UTC m=+0.133984351 container start f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  6 05:10:30 np0005548915 podman[269757]: 2025-12-06 10:10:30.116417999 +0000 UTC m=+0.138102001 container attach f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 05:10:30 np0005548915 awesome_satoshi[269773]: 167 167
Dec  6 05:10:30 np0005548915 systemd[1]: libpod-f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec.scope: Deactivated successfully.
Dec  6 05:10:30 np0005548915 podman[269757]: 2025-12-06 10:10:30.120258241 +0000 UTC m=+0.141942223 container died f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:10:30 np0005548915 systemd[1]: var-lib-containers-storage-overlay-61b4bfc50a87152324f77b3d413c090c173aa823159075c0bc668bda44058abf-merged.mount: Deactivated successfully.
Dec  6 05:10:30 np0005548915 podman[269757]: 2025-12-06 10:10:30.159362137 +0000 UTC m=+0.181046119 container remove f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 05:10:30 np0005548915 systemd[1]: libpod-conmon-f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec.scope: Deactivated successfully.
Dec  6 05:10:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec  6 05:10:30 np0005548915 podman[269795]: 2025-12-06 10:10:30.344102934 +0000 UTC m=+0.043807331 container create 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 05:10:30 np0005548915 systemd[1]: Started libpod-conmon-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope.
Dec  6 05:10:30 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:30 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:30 np0005548915 podman[269795]: 2025-12-06 10:10:30.327511401 +0000 UTC m=+0.027215818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:10:30 np0005548915 podman[269795]: 2025-12-06 10:10:30.430166604 +0000 UTC m=+0.129871001 container init 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:10:30 np0005548915 podman[269795]: 2025-12-06 10:10:30.438450506 +0000 UTC m=+0.138154903 container start 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:10:30 np0005548915 podman[269795]: 2025-12-06 10:10:30.442003951 +0000 UTC m=+0.141708338 container attach 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:10:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:30 np0005548915 nova_compute[254819]: 2025-12-06 10:10:30.881 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:10:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:10:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:31 np0005548915 lvm[269888]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:10:31 np0005548915 lvm[269888]: VG ceph_vg0 finished
Dec  6 05:10:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:31.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:31 np0005548915 upbeat_montalcini[269812]: {}
Dec  6 05:10:31 np0005548915 systemd[1]: libpod-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope: Deactivated successfully.
Dec  6 05:10:31 np0005548915 systemd[1]: libpod-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope: Consumed 1.260s CPU time.
Dec  6 05:10:31 np0005548915 conmon[269812]: conmon 70ceb86202459e215321 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope/container/memory.events
Dec  6 05:10:31 np0005548915 podman[269795]: 2025-12-06 10:10:31.226004164 +0000 UTC m=+0.925708551 container died 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Dec  6 05:10:31 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0-merged.mount: Deactivated successfully.
Dec  6 05:10:31 np0005548915 podman[269795]: 2025-12-06 10:10:31.282232357 +0000 UTC m=+0.981936764 container remove 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:10:31 np0005548915 systemd[1]: libpod-conmon-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope: Deactivated successfully.
Dec  6 05:10:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:10:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:10:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:31 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:10:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec  6 05:10:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:33.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:33.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:33 np0005548915 nova_compute[254819]: 2025-12-06 10:10:33.889 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 109 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Dec  6 05:10:34 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:34Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:9d:d4 10.100.0.14
Dec  6 05:10:34 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:34Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:9d:d4 10.100.0.14
Dec  6 05:10:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000052s ======
Dec  6 05:10:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec  6 05:10:35 np0005548915 podman[269959]: 2025-12-06 10:10:35.445472925 +0000 UTC m=+0.067465364 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:10:35 np0005548915 nova_compute[254819]: 2025-12-06 10:10:35.932 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 109 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 2.2 MiB/s wr, 35 op/s
Dec  6 05:10:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:37.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:37.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:10:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:38 np0005548915 nova_compute[254819]: 2025-12-06 10:10:38.895 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:10:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:10:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:39.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:39.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  6 05:10:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:40 np0005548915 podman[269985]: 2025-12-06 10:10:40.540438716 +0000 UTC m=+0.154566672 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 05:10:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec  6 05:10:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec  6 05:10:40 np0005548915 nova_compute[254819]: 2025-12-06 10:10:40.937 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:41 np0005548915 nova_compute[254819]: 2025-12-06 10:10:41.321 254824 INFO nova.compute.manager [None req-b2119cf0-fba3-46d3-9d41-5774c762d718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Get console output#033[00m
Dec  6 05:10:41 np0005548915 nova_compute[254819]: 2025-12-06 10:10:41.327 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:10:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:41 np0005548915 nova_compute[254819]: 2025-12-06 10:10:41.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  6 05:10:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:42 np0005548915 nova_compute[254819]: 2025-12-06 10:10:42.760 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:42 np0005548915 nova_compute[254819]: 2025-12-06 10:10:42.761 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:42 np0005548915 nova_compute[254819]: 2025-12-06 10:10:42.789 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:42 np0005548915 nova_compute[254819]: 2025-12-06 10:10:42.789 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:42 np0005548915 nova_compute[254819]: 2025-12-06 10:10:42.790 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:42 np0005548915 nova_compute[254819]: 2025-12-06 10:10:42.790 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:10:42 np0005548915 nova_compute[254819]: 2025-12-06 10:10:42.790 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:10:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132234717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.238 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.305 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.306 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:10:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.499 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.500 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4322MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.501 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.501 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.565 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 467f8e9a-e166-409e-920c-689fea4ea3f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.565 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.565 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.637 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.687 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.687 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.705 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.725 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.763 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:10:43 np0005548915 nova_compute[254819]: 2025-12-06 10:10:43.897 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:10:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976439775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:10:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.213 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.220 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.237 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.260 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.260 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.748 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.749 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:44 np0005548915 nova_compute[254819]: 2025-12-06 10:10:44.750 254824 DEBUG nova.objects.instance [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:10:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:45 np0005548915 nova_compute[254819]: 2025-12-06 10:10:45.343 254824 DEBUG nova.objects.instance [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_requests' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:10:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:45 np0005548915 nova_compute[254819]: 2025-12-06 10:10:45.368 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:10:45 np0005548915 podman[270061]: 2025-12-06 10:10:45.4515251 +0000 UTC m=+0.073320540 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 05:10:45 np0005548915 nova_compute[254819]: 2025-12-06 10:10:45.515 254824 DEBUG nova.policy [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:10:45 np0005548915 nova_compute[254819]: 2025-12-06 10:10:45.941 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 106 KiB/s wr, 31 op/s
Dec  6 05:10:46 np0005548915 nova_compute[254819]: 2025-12-06 10:10:46.243 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:46 np0005548915 nova_compute[254819]: 2025-12-06 10:10:46.266 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully created port: 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:10:46 np0005548915 nova_compute[254819]: 2025-12-06 10:10:46.270 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:46 np0005548915 nova_compute[254819]: 2025-12-06 10:10:46.270 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:10:46 np0005548915 nova_compute[254819]: 2025-12-06 10:10:46.270 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:10:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:10:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/755035782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:10:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:10:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/755035782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:10:46 np0005548915 nova_compute[254819]: 2025-12-06 10:10:46.498 254824 INFO nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating ports in neutron#033[00m
Dec  6 05:10:46 np0005548915 nova_compute[254819]: 2025-12-06 10:10:46.669 254824 INFO nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec  6 05:10:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:47.298Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:47 np0005548915 nova_compute[254819]: 2025-12-06 10:10:47.536 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully updated port: 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:10:47 np0005548915 nova_compute[254819]: 2025-12-06 10:10:47.550 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:10:47 np0005548915 nova_compute[254819]: 2025-12-06 10:10:47.550 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:10:47 np0005548915 nova_compute[254819]: 2025-12-06 10:10:47.550 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:10:47 np0005548915 nova_compute[254819]: 2025-12-06 10:10:47.663 254824 DEBUG nova.compute.manager [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:10:47 np0005548915 nova_compute[254819]: 2025-12-06 10:10:47.663 254824 DEBUG nova.compute.manager [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:10:47 np0005548915 nova_compute[254819]: 2025-12-06 10:10:47.663 254824 DEBUG oslo_concurrency.lockutils [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:10:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 111 KiB/s wr, 31 op/s
Dec  6 05:10:48 np0005548915 nova_compute[254819]: 2025-12-06 10:10:48.272 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:10:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:48 np0005548915 nova_compute[254819]: 2025-12-06 10:10:48.899 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:49.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:49.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:49 np0005548915 nova_compute[254819]: 2025-12-06 10:10:49.770 254824 DEBUG nova.compute.manager [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:10:49 np0005548915 nova_compute[254819]: 2025-12-06 10:10:49.770 254824 DEBUG nova.compute.manager [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:10:49 np0005548915 nova_compute[254819]: 2025-12-06 10:10:49.770 254824 DEBUG oslo_concurrency.lockutils [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:10:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.171 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:10:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 16 KiB/s wr, 1 op/s
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.194 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.196 254824 DEBUG oslo_concurrency.lockutils [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.197 254824 DEBUG nova.network.neutron [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.202 254824 DEBUG nova.virt.libvirt.vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.203 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.204 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.205 254824 DEBUG os_vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.206 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.207 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.208 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.211 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.212 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88b1b4c6-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.213 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88b1b4c6-36, col_values=(('external_ids', {'iface-id': '88b1b4c6-36ba-46c8-baa2-da5b266af4d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:5b:44', 'vm-uuid': '467f8e9a-e166-409e-920c-689fea4ea3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:50 np0005548915 NetworkManager[48882]: <info>  [1765015850.2173] manager: (tap88b1b4c6-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.223 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.227 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.228 254824 INFO os_vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36')#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.229 254824 DEBUG nova.virt.libvirt.vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.229 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.230 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.232 254824 DEBUG nova.virt.libvirt.guest [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] attach device xml: <interface type="ethernet">
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <mac address="fa:16:3e:9c:5b:44"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <model type="virtio"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <mtu size="1442"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <target dev="tap88b1b4c6-36"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]: </interface>
Dec  6 05:10:50 np0005548915 nova_compute[254819]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 05:10:50 np0005548915 kernel: tap88b1b4c6-36: entered promiscuous mode
Dec  6 05:10:50 np0005548915 NetworkManager[48882]: <info>  [1765015850.2481] manager: (tap88b1b4c6-36): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec  6 05:10:50 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:50Z|00072|binding|INFO|Claiming lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for this chassis.
Dec  6 05:10:50 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:50Z|00073|binding|INFO|88b1b4c6-36ba-46c8-baa2-da5b266af4d1: Claiming fa:16:3e:9c:5b:44 10.100.0.24
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.250 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.258 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:5b:44 10.100.0.24'], port_security=['fa:16:3e:9c:5b:44 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f5b6720-4878-43e8-9823-306ee6c3568e, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=88b1b4c6-36ba-46c8-baa2-da5b266af4d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.259 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 in datapath af11da89-c29d-4ef1-80d5-4b619757b0ff bound to our chassis#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.261 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network af11da89-c29d-4ef1-80d5-4b619757b0ff#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.277 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[729d96c5-5f8c-4cae-a435-2987bbfb7bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.278 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaf11da89-c1 in ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.280 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaf11da89-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.281 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[076e7d09-fd67-48bb-897e-8a882a943f5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.282 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b9ce65-9f21-46cd-9c0e-d246d488cdb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.296 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[da7178b2-ac6d-4f6a-a87b-b56c4b380053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 systemd-udevd[270097]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.322 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.323 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.323 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:64:9d:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.323 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:9c:5b:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.326 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.325 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8b72e194-6753-4f67-a646-bd9e52d85640]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 NetworkManager[48882]: <info>  [1765015850.3295] device (tap88b1b4c6-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:10:50 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:50Z|00074|binding|INFO|Setting lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 ovn-installed in OVS
Dec  6 05:10:50 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:50Z|00075|binding|INFO|Setting lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 up in Southbound
Dec  6 05:10:50 np0005548915 NetworkManager[48882]: <info>  [1765015850.3305] device (tap88b1b4c6-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.331 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.357 254824 DEBUG nova.virt.libvirt.guest [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:10:50</nova:creationTime>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:10:50 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    <nova:port uuid="88b1b4c6-36ba-46c8-baa2-da5b266af4d1">
Dec  6 05:10:50 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:10:50 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:10:50 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:10:50 np0005548915 nova_compute[254819]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.366 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cb642a-b045-43bf-a5de-70926e889be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.371 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4b999e-3aa0-4db8-98af-b1c4d7945864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 systemd-udevd[270099]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:10:50 np0005548915 NetworkManager[48882]: <info>  [1765015850.3719] manager: (tapaf11da89-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.378 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.400 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[453b2ea9-59bf-41b9-ad56-1aa72752722c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.403 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[55f98480-50f2-4a9f-b744-568e1f91e34b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 NetworkManager[48882]: <info>  [1765015850.4241] device (tapaf11da89-c0): carrier: link connected
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.432 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[03b06b54-9702-4b7e-80a8-586c1653cefe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.449 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[54e7aa72-08d8-4232-8434-d185af79fb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf11da89-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:fe:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422091, 'reachable_time': 33829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270121, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.461 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0e12f130-b6ba-4f52-82e1-e0bfb2546120]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:fe2e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422091, 'tstamp': 422091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270122, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.479 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[758e41a8-4976-4934-acc5-4da1bfa7bf97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf11da89-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:fe:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422091, 'reachable_time': 33829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270123, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.507 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1fdc9e-63b2-4321-86cc-87217f4326d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.559 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[26102cdd-05e8-46e2-9f49-7a8eed779aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.560 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf11da89-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.561 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.561 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf11da89-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.563 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 kernel: tapaf11da89-c0: entered promiscuous mode
Dec  6 05:10:50 np0005548915 NetworkManager[48882]: <info>  [1765015850.5645] manager: (tapaf11da89-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.566 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.566 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaf11da89-c0, col_values=(('external_ids', {'iface-id': '11d93e6a-f3e6-434c-bb3f-39cb96f417cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:10:50 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:50Z|00076|binding|INFO|Releasing lport 11d93e6a-f3e6-434c-bb3f-39cb96f417cf from this chassis (sb_readonly=0)
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.587 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.588 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/af11da89-c29d-4ef1-80d5-4b619757b0ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/af11da89-c29d-4ef1-80d5-4b619757b0ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.589 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[58b1cd74-e926-4793-b197-93746faa3cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.590 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-af11da89-c29d-4ef1-80d5-4b619757b0ff
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/af11da89-c29d-4ef1-80d5-4b619757b0ff.pid.haproxy
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID af11da89-c29d-4ef1-80d5-4b619757b0ff
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:10:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.591 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'env', 'PROCESS_TAG=haproxy-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/af11da89-c29d-4ef1-80d5-4b619757b0ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:10:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec  6 05:10:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec  6 05:10:50 np0005548915 nova_compute[254819]: 2025-12-06 10:10:50.943 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:50 np0005548915 podman[270155]: 2025-12-06 10:10:50.956564699 +0000 UTC m=+0.054610710 container create 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:10:50 np0005548915 systemd[1]: Started libpod-conmon-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope.
Dec  6 05:10:51 np0005548915 podman[270155]: 2025-12-06 10:10:50.926230258 +0000 UTC m=+0.024276319 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:10:51 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:10:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d556cdea8f60788a15893bebf04b8e9b5c638ceed2e80d5a7f1c58c122409c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:10:51 np0005548915 podman[270155]: 2025-12-06 10:10:51.042036344 +0000 UTC m=+0.140082375 container init 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:10:51 np0005548915 podman[270155]: 2025-12-06 10:10:51.047440908 +0000 UTC m=+0.145486919 container start 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  6 05:10:51 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : New worker (270177) forked
Dec  6 05:10:51 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : Loading success.
Dec  6 05:10:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:51.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:51 np0005548915 nova_compute[254819]: 2025-12-06 10:10:51.939 254824 DEBUG nova.compute.manager [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:10:51 np0005548915 nova_compute[254819]: 2025-12-06 10:10:51.940 254824 DEBUG oslo_concurrency.lockutils [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:51 np0005548915 nova_compute[254819]: 2025-12-06 10:10:51.940 254824 DEBUG oslo_concurrency.lockutils [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:51 np0005548915 nova_compute[254819]: 2025-12-06 10:10:51.940 254824 DEBUG oslo_concurrency.lockutils [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:51 np0005548915 nova_compute[254819]: 2025-12-06 10:10:51.941 254824 DEBUG nova.compute.manager [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:10:51 np0005548915 nova_compute[254819]: 2025-12-06 10:10:51.941 254824 WARNING nova.compute.manager [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:10:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 16 KiB/s wr, 1 op/s
Dec  6 05:10:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:53 np0005548915 nova_compute[254819]: 2025-12-06 10:10:53.124 254824 DEBUG nova.network.neutron [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:10:53 np0005548915 nova_compute[254819]: 2025-12-06 10:10:53.124 254824 DEBUG nova.network.neutron [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:10:53 np0005548915 nova_compute[254819]: 2025-12-06 10:10:53.145 254824 DEBUG oslo_concurrency.lockutils [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:10:53 np0005548915 nova_compute[254819]: 2025-12-06 10:10:53.146 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:10:53 np0005548915 nova_compute[254819]: 2025-12-06 10:10:53.147 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:10:53 np0005548915 nova_compute[254819]: 2025-12-06 10:10:53.147 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:10:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:53 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:53Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:5b:44 10.100.0.24
Dec  6 05:10:53 np0005548915 ovn_controller[152417]: 2025-12-06T10:10:53Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:5b:44 10.100.0.24
Dec  6 05:10:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:53.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:10:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:10:54 np0005548915 nova_compute[254819]: 2025-12-06 10:10:54.005 254824 DEBUG nova.compute.manager [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:10:54 np0005548915 nova_compute[254819]: 2025-12-06 10:10:54.006 254824 DEBUG oslo_concurrency.lockutils [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:54 np0005548915 nova_compute[254819]: 2025-12-06 10:10:54.006 254824 DEBUG oslo_concurrency.lockutils [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:54 np0005548915 nova_compute[254819]: 2025-12-06 10:10:54.006 254824 DEBUG oslo_concurrency.lockutils [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:54 np0005548915 nova_compute[254819]: 2025-12-06 10:10:54.007 254824 DEBUG nova.compute.manager [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:10:54 np0005548915 nova_compute[254819]: 2025-12-06 10:10:54.007 254824 WARNING nova.compute.manager [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:10:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:10:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:10:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:10:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:10:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:10:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:10:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Dec  6 05:10:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:10:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:10:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:10:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:10:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:10:54 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  6 05:10:54 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:54.984140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:10:54 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  6 05:10:54 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015854984200, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 251, "total_data_size": 4329660, "memory_usage": 4393624, "flush_reason": "Manual Compaction"}
Dec  6 05:10:54 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855008084, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4184524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24856, "largest_seqno": 26985, "table_properties": {"data_size": 4174854, "index_size": 6100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20016, "raw_average_key_size": 20, "raw_value_size": 4155559, "raw_average_value_size": 4236, "num_data_blocks": 267, "num_entries": 981, "num_filter_entries": 981, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015644, "oldest_key_time": 1765015644, "file_creation_time": 1765015854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 24027 microseconds, and 9523 cpu microseconds.
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.008166) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4184524 bytes OK
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.008197) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.026893) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.026925) EVENT_LOG_v1 {"time_micros": 1765015855026917, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.026949) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4320895, prev total WAL file size 4320895, number of live WAL files 2.
Dec  6 05:10:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.029286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4086KB)], [56(12MB)]
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855029364, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17518425, "oldest_snapshot_seqno": -1}
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5891 keys, 15448072 bytes, temperature: kUnknown
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855183607, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 15448072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15407288, "index_size": 24930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 149649, "raw_average_key_size": 25, "raw_value_size": 15299356, "raw_average_value_size": 2597, "num_data_blocks": 1018, "num_entries": 5891, "num_filter_entries": 5891, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.184092) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15448072 bytes
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.185725) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.5 rd, 100.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.7 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 6411, records dropped: 520 output_compression: NoCompression
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.185761) EVENT_LOG_v1 {"time_micros": 1765015855185743, "job": 30, "event": "compaction_finished", "compaction_time_micros": 154363, "compaction_time_cpu_micros": 55100, "output_level": 6, "num_output_files": 1, "total_output_size": 15448072, "num_input_records": 6411, "num_output_records": 5891, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855187424, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855192155, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.029078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:10:55 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:10:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:55.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:10:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:55.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.803 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.824 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.824 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.824 254824 DEBUG oslo_concurrency.lockutils [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.825 254824 DEBUG nova.network.neutron [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.825 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.827 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.827 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.853 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.854 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.854 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 05:10:55 np0005548915 nova_compute[254819]: 2025-12-06 10:10:55.994 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:10:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 8.3 KiB/s wr, 1 op/s
Dec  6 05:10:56 np0005548915 nova_compute[254819]: 2025-12-06 10:10:56.365 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:10:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:57.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:57.299Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:10:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:57.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:57 np0005548915 nova_compute[254819]: 2025-12-06 10:10:57.574 254824 DEBUG nova.network.neutron [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:10:57 np0005548915 nova_compute[254819]: 2025-12-06 10:10:57.575 254824 DEBUG nova.network.neutron [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:10:57 np0005548915 nova_compute[254819]: 2025-12-06 10:10:57.592 254824 DEBUG oslo_concurrency.lockutils [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:10:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Dec  6 05:10:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:59.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:10:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:59.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:10:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:10:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:10:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:59.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:10:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:10:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:10:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:59.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:10:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec  6 05:11:00 np0005548915 nova_compute[254819]: 2025-12-06 10:11:00.219 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Dec  6 05:11:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Dec  6 05:11:00 np0005548915 nova_compute[254819]: 2025-12-06 10:11:00.990 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:11:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:01.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:11:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec  6 05:11:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:03.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  6 05:11:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:05 np0005548915 nova_compute[254819]: 2025-12-06 10:11:05.222 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:05.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:05 np0005548915 nova_compute[254819]: 2025-12-06 10:11:05.993 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:11:06 np0005548915 podman[270227]: 2025-12-06 10:11:06.485735175 +0000 UTC m=+0.107132705 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:11:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:07.301Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:07.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  6 05:11:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:11:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:11:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:09.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:09.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  6 05:11:10 np0005548915 nova_compute[254819]: 2025-12-06 10:11:10.226 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:11:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:11:10 np0005548915 nova_compute[254819]: 2025-12-06 10:11:10.996 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:11.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:11 np0005548915 podman[270253]: 2025-12-06 10:11:11.500365988 +0000 UTC m=+0.135166144 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:11:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  6 05:11:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:13.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  6 05:11:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:15.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:15 np0005548915 nova_compute[254819]: 2025-12-06 10:11:15.229 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:15.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:16 np0005548915 nova_compute[254819]: 2025-12-06 10:11:16.000 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Dec  6 05:11:16 np0005548915 podman[270309]: 2025-12-06 10:11:16.470037318 +0000 UTC m=+0.088876956 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 05:11:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:17.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:17.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:17.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 75 op/s
Dec  6 05:11:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:19.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:11:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:19.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:11:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:19.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 64 op/s
Dec  6 05:11:20 np0005548915 nova_compute[254819]: 2025-12-06 10:11:20.232 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:11:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:11:21 np0005548915 nova_compute[254819]: 2025-12-06 10:11:21.001 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:21.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 64 op/s
Dec  6 05:11:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:23.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:23.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:11:23
Dec  6 05:11:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:11:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:11:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.nfs', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec  6 05:11:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:11:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:11:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015184825120466237 of space, bias 1.0, pg target 0.4555447536139871 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:11:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:11:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:25 np0005548915 nova_compute[254819]: 2025-12-06 10:11:25.234 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:25.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:25 np0005548915 nova_compute[254819]: 2025-12-06 10:11:25.320 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:25.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:25 np0005548915 nova_compute[254819]: 2025-12-06 10:11:25.483 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Triggering sync for uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 05:11:25 np0005548915 nova_compute[254819]: 2025-12-06 10:11:25.484 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:25 np0005548915 nova_compute[254819]: 2025-12-06 10:11:25.484 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:25 np0005548915 nova_compute[254819]: 2025-12-06 10:11:25.526 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:26 np0005548915 nova_compute[254819]: 2025-12-06 10:11:26.005 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:11:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:27.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:27.302Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:11:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:27.304Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:11:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec  6 05:11:28 np0005548915 nova_compute[254819]: 2025-12-06 10:11:28.209 254824 DEBUG nova.compute.manager [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:28 np0005548915 nova_compute[254819]: 2025-12-06 10:11:28.209 254824 DEBUG nova.compute.manager [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:11:28 np0005548915 nova_compute[254819]: 2025-12-06 10:11:28.210 254824 DEBUG oslo_concurrency.lockutils [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:11:28 np0005548915 nova_compute[254819]: 2025-12-06 10:11:28.210 254824 DEBUG oslo_concurrency.lockutils [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:11:28 np0005548915 nova_compute[254819]: 2025-12-06 10:11:28.211 254824 DEBUG nova.network.neutron [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:11:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:29.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:29.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:29.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec  6 05:11:30 np0005548915 nova_compute[254819]: 2025-12-06 10:11:30.238 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc001900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:30] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:11:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:30] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:11:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:31 np0005548915 nova_compute[254819]: 2025-12-06 10:11:31.050 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:31.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:31.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec  6 05:11:32 np0005548915 nova_compute[254819]: 2025-12-06 10:11:32.227 254824 DEBUG nova.network.neutron [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:11:32 np0005548915 nova_compute[254819]: 2025-12-06 10:11:32.228 254824 DEBUG nova.network.neutron [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:11:32 np0005548915 nova_compute[254819]: 2025-12-06 10:11:32.255 254824 DEBUG oslo_concurrency.lockutils [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:11:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc001900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:11:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:33.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:11:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:33.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:11:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 14 op/s
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:11:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:34 np0005548915 podman[270550]: 2025-12-06 10:11:34.97095114 +0000 UTC m=+0.035186422 container create d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:11:34 np0005548915 systemd[1]: Started libpod-conmon-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope.
Dec  6 05:11:35 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:11:35 np0005548915 podman[270550]: 2025-12-06 10:11:35.025364694 +0000 UTC m=+0.089599996 container init d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:11:35 np0005548915 podman[270550]: 2025-12-06 10:11:35.034777726 +0000 UTC m=+0.099013018 container start d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 05:11:35 np0005548915 podman[270550]: 2025-12-06 10:11:35.037700414 +0000 UTC m=+0.101935716 container attach d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:11:35 np0005548915 lucid_faraday[270568]: 167 167
Dec  6 05:11:35 np0005548915 systemd[1]: libpod-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope: Deactivated successfully.
Dec  6 05:11:35 np0005548915 conmon[270568]: conmon d61f761404107f8f3ac2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope/container/memory.events
Dec  6 05:11:35 np0005548915 podman[270550]: 2025-12-06 10:11:35.042150132 +0000 UTC m=+0.106385414 container died d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:11:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:35 np0005548915 podman[270550]: 2025-12-06 10:11:34.955744113 +0000 UTC m=+0.019979415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:11:35 np0005548915 systemd[1]: var-lib-containers-storage-overlay-86558498d450bae09abf92f6a397e142002ba15df71e3814649f6f961b7cbff1-merged.mount: Deactivated successfully.
Dec  6 05:11:35 np0005548915 podman[270550]: 2025-12-06 10:11:35.086857127 +0000 UTC m=+0.151092409 container remove d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 05:11:35 np0005548915 systemd[1]: libpod-conmon-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope: Deactivated successfully.
Dec  6 05:11:35 np0005548915 nova_compute[254819]: 2025-12-06 10:11:35.241 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:35.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:35 np0005548915 podman[270591]: 2025-12-06 10:11:35.273697561 +0000 UTC m=+0.050819599 container create ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:11:35 np0005548915 systemd[1]: Started libpod-conmon-ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64.scope.
Dec  6 05:11:35 np0005548915 podman[270591]: 2025-12-06 10:11:35.247896012 +0000 UTC m=+0.025018080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:11:35 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:11:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:35 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:35 np0005548915 podman[270591]: 2025-12-06 10:11:35.38813828 +0000 UTC m=+0.165260328 container init ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:11:35 np0005548915 podman[270591]: 2025-12-06 10:11:35.395106555 +0000 UTC m=+0.172228583 container start ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:11:35 np0005548915 podman[270591]: 2025-12-06 10:11:35.399233506 +0000 UTC m=+0.176355554 container attach ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 05:11:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:35.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:11:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:35 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:11:35 np0005548915 objective_bouman[270609]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:11:35 np0005548915 objective_bouman[270609]: --> All data devices are unavailable
Dec  6 05:11:35 np0005548915 systemd[1]: libpod-ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64.scope: Deactivated successfully.
Dec  6 05:11:35 np0005548915 podman[270591]: 2025-12-06 10:11:35.750664698 +0000 UTC m=+0.527786766 container died ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:11:35 np0005548915 systemd[1]: var-lib-containers-storage-overlay-8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7-merged.mount: Deactivated successfully.
Dec  6 05:11:35 np0005548915 podman[270591]: 2025-12-06 10:11:35.801500617 +0000 UTC m=+0.578622645 container remove ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  6 05:11:35 np0005548915 systemd[1]: libpod-conmon-ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64.scope: Deactivated successfully.
Dec  6 05:11:36 np0005548915 nova_compute[254819]: 2025-12-06 10:11:36.089 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:36 np0005548915 podman[270729]: 2025-12-06 10:11:36.343302238 +0000 UTC m=+0.039236550 container create 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 05:11:36 np0005548915 systemd[1]: Started libpod-conmon-77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3.scope.
Dec  6 05:11:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:11:36 np0005548915 podman[270729]: 2025-12-06 10:11:36.419324269 +0000 UTC m=+0.115258591 container init 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 05:11:36 np0005548915 podman[270729]: 2025-12-06 10:11:36.326790216 +0000 UTC m=+0.022724548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:11:36 np0005548915 podman[270729]: 2025-12-06 10:11:36.426294405 +0000 UTC m=+0.122228717 container start 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 05:11:36 np0005548915 quizzical_poincare[270746]: 167 167
Dec  6 05:11:36 np0005548915 systemd[1]: libpod-77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3.scope: Deactivated successfully.
Dec  6 05:11:36 np0005548915 podman[270729]: 2025-12-06 10:11:36.43060148 +0000 UTC m=+0.126535812 container attach 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 05:11:36 np0005548915 podman[270729]: 2025-12-06 10:11:36.43097507 +0000 UTC m=+0.126909382 container died 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 05:11:36 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d510af21d9173d4e932d516d9ee2d9c51b143154d591ea8b12a3f978879cfd95-merged.mount: Deactivated successfully.
Dec  6 05:11:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 14 op/s
Dec  6 05:11:36 np0005548915 podman[270729]: 2025-12-06 10:11:36.463329495 +0000 UTC m=+0.159263807 container remove 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:11:36 np0005548915 systemd[1]: libpod-conmon-77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3.scope: Deactivated successfully.
Dec  6 05:11:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc001900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:36 np0005548915 podman[270770]: 2025-12-06 10:11:36.68316113 +0000 UTC m=+0.069106408 container create 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 05:11:36 np0005548915 systemd[1]: Started libpod-conmon-77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1.scope.
Dec  6 05:11:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:11:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:36 np0005548915 podman[270770]: 2025-12-06 10:11:36.63710822 +0000 UTC m=+0.023053518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:11:36 np0005548915 podman[270770]: 2025-12-06 10:11:36.782384572 +0000 UTC m=+0.168329880 container init 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:11:36 np0005548915 podman[270770]: 2025-12-06 10:11:36.790356295 +0000 UTC m=+0.176301573 container start 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 05:11:36 np0005548915 podman[270770]: 2025-12-06 10:11:36.811156821 +0000 UTC m=+0.197102109 container attach 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 05:11:36 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:36.847 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:11:36 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:36.848 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:11:36 np0005548915 nova_compute[254819]: 2025-12-06 10:11:36.850 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:36 np0005548915 podman[270784]: 2025-12-06 10:11:36.872715267 +0000 UTC m=+0.151902321 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  6 05:11:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]: {
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:    "1": [
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:        {
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "devices": [
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "/dev/loop3"
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            ],
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "lv_name": "ceph_lv0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "lv_size": "21470642176",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "name": "ceph_lv0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "tags": {
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.cluster_name": "ceph",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.crush_device_class": "",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.encrypted": "0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.osd_id": "1",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.type": "block",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.vdo": "0",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:                "ceph.with_tpm": "0"
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            },
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "type": "block",
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:            "vg_name": "ceph_vg0"
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:        }
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]:    ]
Dec  6 05:11:37 np0005548915 romantic_shirley[270787]: }
Dec  6 05:11:37 np0005548915 systemd[1]: libpod-77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1.scope: Deactivated successfully.
Dec  6 05:11:37 np0005548915 podman[270770]: 2025-12-06 10:11:37.098385707 +0000 UTC m=+0.484330985 container died 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 05:11:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56-merged.mount: Deactivated successfully.
Dec  6 05:11:37 np0005548915 podman[270770]: 2025-12-06 10:11:37.189637807 +0000 UTC m=+0.575583085 container remove 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:11:37 np0005548915 systemd[1]: libpod-conmon-77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1.scope: Deactivated successfully.
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.206 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-88b1b4c6-36ba-46c8-baa2-da5b266af4d1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.207 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-88b1b4c6-36ba-46c8-baa2-da5b266af4d1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.226 254824 DEBUG nova.objects.instance [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:11:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:37.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.265 254824 DEBUG nova.virt.libvirt.vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.265 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.266 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.269 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.271 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.273 254824 DEBUG nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Attempting to detach device tap88b1b4c6-36 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.273 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <mac address="fa:16:3e:9c:5b:44"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <model type="virtio"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <mtu size="1442"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <target dev="tap88b1b4c6-36"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: </interface>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.279 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.281 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface>not found in domain: <domain type='kvm' id='4'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <name>instance-00000006</name>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:10:50</nova:creationTime>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:port uuid="88b1b4c6-36ba-46c8-baa2-da5b266af4d1">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:64:9d:d4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target dev='tapec2bc9a6-15'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:9c:5b:44'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target dev='tap88b1b4c6-36'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='net1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.282 254824 INFO nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tap88b1b4c6-36 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the persistent domain config.#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.282 254824 DEBUG nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] (1/8): Attempting to detach device tap88b1b4c6-36 with device alias net1 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.282 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <mac address="fa:16:3e:9c:5b:44"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <model type="virtio"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <mtu size="1442"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <target dev="tap88b1b4c6-36"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: </interface>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 05:11:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:37.305Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:11:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:37.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:11:37 np0005548915 kernel: tap88b1b4c6-36 (unregistering): left promiscuous mode
Dec  6 05:11:37 np0005548915 NetworkManager[48882]: <info>  [1765015897.3349] device (tap88b1b4c6-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.345 254824 DEBUG nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Received event <DeviceRemovedEvent: 1765015897.3447344, 467f8e9a-e166-409e-920c-689fea4ea3f6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.346 254824 DEBUG nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Start waiting for the detach event from libvirt for device tap88b1b4c6-36 with device alias net1 for instance 467f8e9a-e166-409e-920c-689fea4ea3f6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.346 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:11:37 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:37Z|00077|binding|INFO|Releasing lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 from this chassis (sb_readonly=0)
Dec  6 05:11:37 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:37Z|00078|binding|INFO|Setting lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 down in Southbound
Dec  6 05:11:37 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:37Z|00079|binding|INFO|Removing iface tap88b1b4c6-36 ovn-installed in OVS
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.408 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.411 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface>not found in domain: <domain type='kvm' id='4'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <name>instance-00000006</name>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:10:50</nova:creationTime>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:port uuid="88b1b4c6-36ba-46c8-baa2-da5b266af4d1">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:64:9d:d4'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target dev='tapec2bc9a6-15'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.413 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:5b:44 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f5b6720-4878-43e8-9823-306ee6c3568e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=88b1b4c6-36ba-46c8-baa2-da5b266af4d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.411 254824 INFO nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tap88b1b4c6-36 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the live domain config.#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.412 254824 DEBUG nova.virt.libvirt.vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.412 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.414 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 in datapath af11da89-c29d-4ef1-80d5-4b619757b0ff unbound from our chassis#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.416 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af11da89-c29d-4ef1-80d5-4b619757b0ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.416 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.417 254824 DEBUG os_vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.417 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[469db3e3-627d-4107-b4dc-0ade42ee9b0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.418 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff namespace which is not needed anymore#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.420 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.420 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88b1b4c6-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.421 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.424 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.426 254824 INFO os_vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36')#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.427 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:11:37</nova:creationTime>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:11:37 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:37 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:11:37 np0005548915 nova_compute[254819]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 05:11:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:37.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:37 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : haproxy version is 2.8.14-c23fe91
Dec  6 05:11:37 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : path to executable is /usr/sbin/haproxy
Dec  6 05:11:37 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [WARNING]  (270175) : Exiting Master process...
Dec  6 05:11:37 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [WARNING]  (270175) : Exiting Master process...
Dec  6 05:11:37 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [ALERT]    (270175) : Current worker (270177) exited with code 143 (Terminated)
Dec  6 05:11:37 np0005548915 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [WARNING]  (270175) : All workers exited. Exiting... (0)
Dec  6 05:11:37 np0005548915 systemd[1]: libpod-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope: Deactivated successfully.
Dec  6 05:11:37 np0005548915 conmon[270171]: conmon 351c3f74895b352c68f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope/container/memory.events
Dec  6 05:11:37 np0005548915 podman[270905]: 2025-12-06 10:11:37.564225098 +0000 UTC m=+0.047604203 container died 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:11:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2-userdata-shm.mount: Deactivated successfully.
Dec  6 05:11:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-65d556cdea8f60788a15893bebf04b8e9b5c638ceed2e80d5a7f1c58c122409c-merged.mount: Deactivated successfully.
Dec  6 05:11:37 np0005548915 podman[270905]: 2025-12-06 10:11:37.604156075 +0000 UTC m=+0.087535190 container cleanup 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:11:37 np0005548915 systemd[1]: libpod-conmon-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope: Deactivated successfully.
Dec  6 05:11:37 np0005548915 podman[270958]: 2025-12-06 10:11:37.661569259 +0000 UTC m=+0.037469382 container remove 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.668 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d0b25a-84a8-4823-9826-40f087f792be]: (4, ('Sat Dec  6 10:11:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff (351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2)\n351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2\nSat Dec  6 10:11:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff (351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2)\n351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.670 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3b4cd8-2e6e-49bc-a7c6-9b6fddd88872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.671 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf11da89-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.673 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:37 np0005548915 kernel: tapaf11da89-c0: left promiscuous mode
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.676 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.680 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8a418272-80e6-493f-8a13-9c6e8bfb89f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 nova_compute[254819]: 2025-12-06 10:11:37.688 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.699 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e88782-f9af-4918-bf0f-f4f30f058333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.700 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0f06d820-4828-43e2-a0af-8a940b9eea82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.719 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[85c3d405-d849-4332-bc83-c6249e5d75bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422084, 'reachable_time': 15380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270991, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 systemd[1]: run-netns-ovnmeta\x2daf11da89\x2dc29d\x2d4ef1\x2d80d5\x2d4b619757b0ff.mount: Deactivated successfully.
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.722 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:11:37 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.722 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[56daa704-65fb-4b1c-a8f3-3880788ea376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:37 np0005548915 podman[270992]: 2025-12-06 10:11:37.780318783 +0000 UTC m=+0.044891720 container create 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:11:37 np0005548915 systemd[1]: Started libpod-conmon-864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47.scope.
Dec  6 05:11:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:11:37 np0005548915 podman[270992]: 2025-12-06 10:11:37.76262976 +0000 UTC m=+0.027202727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:11:37 np0005548915 podman[270992]: 2025-12-06 10:11:37.858064141 +0000 UTC m=+0.122637098 container init 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  6 05:11:37 np0005548915 podman[270992]: 2025-12-06 10:11:37.871602263 +0000 UTC m=+0.136175200 container start 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:11:37 np0005548915 podman[270992]: 2025-12-06 10:11:37.874974683 +0000 UTC m=+0.139547640 container attach 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 05:11:37 np0005548915 zealous_ride[271008]: 167 167
Dec  6 05:11:37 np0005548915 systemd[1]: libpod-864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47.scope: Deactivated successfully.
Dec  6 05:11:37 np0005548915 podman[270992]: 2025-12-06 10:11:37.878713673 +0000 UTC m=+0.143286630 container died 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:11:37 np0005548915 systemd[1]: var-lib-containers-storage-overlay-246626891fab1a4b287eadfbe2b353aed297b4caf91ad2cd442c089fcdf33463-merged.mount: Deactivated successfully.
Dec  6 05:11:37 np0005548915 podman[270992]: 2025-12-06 10:11:37.920601193 +0000 UTC m=+0.185174130 container remove 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:11:37 np0005548915 systemd[1]: libpod-conmon-864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47.scope: Deactivated successfully.
Dec  6 05:11:38 np0005548915 podman[271034]: 2025-12-06 10:11:38.118753969 +0000 UTC m=+0.056313047 container create 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  6 05:11:38 np0005548915 systemd[1]: Started libpod-conmon-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope.
Dec  6 05:11:38 np0005548915 podman[271034]: 2025-12-06 10:11:38.09972622 +0000 UTC m=+0.037285328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.193 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.194 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.195 254824 DEBUG nova.network.neutron [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:11:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:11:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:11:38 np0005548915 podman[271034]: 2025-12-06 10:11:38.217712033 +0000 UTC m=+0.155271211 container init 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 05:11:38 np0005548915 podman[271034]: 2025-12-06 10:11:38.227840953 +0000 UTC m=+0.165400041 container start 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:11:38 np0005548915 podman[271034]: 2025-12-06 10:11:38.233705971 +0000 UTC m=+0.171265089 container attach 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.328 254824 DEBUG nova.compute.manager [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-deleted-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.328 254824 INFO nova.compute.manager [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Neutron deleted interface 88b1b4c6-36ba-46c8-baa2-da5b266af4d1; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.328 254824 DEBUG nova.network.neutron [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.348 254824 DEBUG nova.objects.instance [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'system_metadata' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.368 254824 DEBUG nova.objects.instance [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'flavor' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.386 254824 DEBUG nova.virt.libvirt.vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.387 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.388 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.393 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.397 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface>not found in domain: <domain type='kvm' id='4'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <name>instance-00000006</name>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:11:37</nova:creationTime>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:64:9d:d4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target dev='tapec2bc9a6-15'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.397 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.404 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface>not found in domain: <domain type='kvm' id='4'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <name>instance-00000006</name>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:11:37</nova:creationTime>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <memory unit='KiB'>131072</memory>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <vcpu placement='static'>1</vcpu>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <resource>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <partition>/machine</partition>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </resource>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <sysinfo type='smbios'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='manufacturer'>RDO</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <entry name='family'>Virtual Machine</entry>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <boot dev='hd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <smbios mode='sysinfo'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <vmcoreinfo state='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <vendor>AMD</vendor>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='x2apic'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc-deadline'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='hypervisor'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='tsc_adjust'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='spec-ctrl'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='stibp'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='ssbd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='cmp_legacy'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='overflow-recov'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='succor'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='ibrs'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='amd-ssbd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='virt-ssbd'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='lbrv'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='tsc-scale'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='vmcb-clean'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='flushbyasid'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pause-filter'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='pfthreshold'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='xsaves'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='svm'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='require' name='topoext'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='npt'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <feature policy='disable' name='nrip-save'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <clock offset='utc'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <timer name='hpet' present='no'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <on_poweroff>destroy</on_poweroff>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <on_reboot>restart</on_reboot>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <on_crash>destroy</on_crash>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <disk type='network' device='disk'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target dev='vda' bus='virtio'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='virtio-disk0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <disk type='network' device='cdrom'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <auth username='openstack'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.100' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.102' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <host name='192.168.122.101' port='6789'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target dev='sda' bus='sata'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <readonly/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='sata0-0-0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pcie.0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='1' port='0x10'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='2' port='0x11'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='3' port='0x12'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='4' port='0x13'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='5' port='0x14'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='6' port='0x15'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='7' port='0x16'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='8' port='0x17'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.8'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='9' port='0x18'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.9'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='10' port='0x19'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.10'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='11' port='0x1a'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.11'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='12' port='0x1b'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.12'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='13' port='0x1c'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.13'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='14' port='0x1d'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.14'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='15' port='0x1e'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.15'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='16' port='0x1f'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.16'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='17' port='0x20'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.17'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='18' port='0x21'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.18'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='19' port='0x22'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.19'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='20' port='0x23'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.20'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='21' port='0x24'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.21'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='22' port='0x25'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.22'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='23' port='0x26'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.23'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='24' port='0x27'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.24'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-root-port'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target chassis='25' port='0x28'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.25'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model name='pcie-pci-bridge'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='pci.26'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='usb'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <controller type='sata' index='0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='ide'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </controller>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <interface type='ethernet'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <mac address='fa:16:3e:64:9d:d4'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target dev='tapec2bc9a6-15'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model type='virtio'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <mtu size='1442'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='net0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <serial type='pty'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target type='isa-serial' port='0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:        <model name='isa-serial'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      </target>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <source path='/dev/pts/0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <target type='serial' port='0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='serial0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </console>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <input type='tablet' bus='usb'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='input0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='usb' bus='0' port='1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <input type='mouse' bus='ps2'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='input1'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <input type='keyboard' bus='ps2'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='input2'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </input>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <listen type='address' address='::0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </graphics>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <audio id='1' type='none'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='video0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <watchdog model='itco' action='reset'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='watchdog0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </watchdog>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <memballoon model='virtio'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <stats period='10'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='balloon0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <rng model='virtio'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <backend model='random'>/dev/urandom</backend>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <alias name='rng0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <label>+107:+107</label>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <imagelabel>+107:+107</imagelabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </seclabel>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.404 254824 WARNING nova.virt.libvirt.driver [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Detaching interface fa:16:3e:9c:5b:44 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap88b1b4c6-36' not found.#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.405 254824 DEBUG nova.virt.libvirt.vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.406 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.406 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.407 254824 DEBUG os_vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.409 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.410 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88b1b4c6-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.410 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.412 254824 INFO os_vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36')#033[00m
Dec  6 05:11:38 np0005548915 nova_compute[254819]: 2025-12-06 10:11:38.413 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:creationTime>2025-12-06 10:11:38</nova:creationTime>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:flavor name="m1.nano">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:memory>128</nova:memory>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:disk>1</nova:disk>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:swap>0</nova:swap>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:vcpus>1</nova:vcpus>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:flavor>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:owner>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:owner>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  <nova:ports>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec  6 05:11:38 np0005548915 nova_compute[254819]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:    </nova:port>
Dec  6 05:11:38 np0005548915 nova_compute[254819]:  </nova:ports>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: </nova:instance>
Dec  6 05:11:38 np0005548915 nova_compute[254819]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 05:11:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec  6 05:11:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:38 np0005548915 lvm[271124]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:11:38 np0005548915 lvm[271124]: VG ceph_vg0 finished
Dec  6 05:11:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:11:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:11:38 np0005548915 agitated_bohr[271051]: {}
Dec  6 05:11:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:39.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:39 np0005548915 systemd[1]: libpod-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope: Deactivated successfully.
Dec  6 05:11:39 np0005548915 systemd[1]: libpod-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope: Consumed 1.252s CPU time.
Dec  6 05:11:39 np0005548915 podman[271034]: 2025-12-06 10:11:39.037581335 +0000 UTC m=+0.975140413 container died 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 05:11:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e-merged.mount: Deactivated successfully.
Dec  6 05:11:39 np0005548915 podman[271034]: 2025-12-06 10:11:39.080895993 +0000 UTC m=+1.018455081 container remove 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 05:11:39 np0005548915 systemd[1]: libpod-conmon-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope: Deactivated successfully.
Dec  6 05:11:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:11:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:11:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:39.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.300 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-unplugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.301 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-unplugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 WARNING nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-unplugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.303 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.303 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.304 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:39 np0005548915 nova_compute[254819]: 2025-12-06 10:11:39.304 254824 WARNING nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:11:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:39.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:39 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:39 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:11:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec  6 05:11:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:40] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  6 05:11:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:40] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  6 05:11:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:41 np0005548915 nova_compute[254819]: 2025-12-06 10:11:41.089 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:41.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.423 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec  6 05:11:42 np0005548915 podman[271168]: 2025-12-06 10:11:42.526467101 +0000 UTC m=+0.134090575 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:11:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.775 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.775 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.775 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.776 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:11:42 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:42Z|00080|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec  6 05:11:42 np0005548915 nova_compute[254819]: 2025-12-06 10:11:42.960 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.054 254824 INFO nova.network.neutron [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.054 254824 DEBUG nova.network.neutron [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.069 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.089 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-88b1b4c6-36ba-46c8-baa2-da5b266af4d1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:11:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436422311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.247 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:11:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:43.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.323 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.323 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:11:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:43.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.498 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.500 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4359MB free_disk=59.942543029785156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.500 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.500 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.560 254824 DEBUG nova.compute.manager [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.561 254824 DEBUG nova.compute.manager [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.561 254824 DEBUG oslo_concurrency.lockutils [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.562 254824 DEBUG oslo_concurrency.lockutils [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.562 254824 DEBUG nova.network.neutron [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.587 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 467f8e9a-e166-409e-920c-689fea4ea3f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.588 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.589 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.671 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.672 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.673 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.674 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.674 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.677 254824 INFO nova.compute.manager [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Terminating instance#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.679 254824 DEBUG nova.compute.manager [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.713 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:11:43 np0005548915 kernel: tapec2bc9a6-15 (unregistering): left promiscuous mode
Dec  6 05:11:43 np0005548915 NetworkManager[48882]: <info>  [1765015903.7467] device (tapec2bc9a6-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00081|binding|INFO|Releasing lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b from this chassis (sb_readonly=0)
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00082|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b down in Southbound
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00083|binding|INFO|Removing iface tapec2bc9a6-15 ovn-installed in OVS
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.751 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.766 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.768 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd unbound from our chassis#033[00m
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.769 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.771 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[558d4aed-f3b8-4641-b62c-887275f749bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.772 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd namespace which is not needed anymore#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.780 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  6 05:11:43 np0005548915 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 17.785s CPU time.
Dec  6 05:11:43 np0005548915 systemd-machined[216202]: Machine qemu-4-instance-00000006 terminated.
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.851 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:11:43 np0005548915 kernel: tapec2bc9a6-15: entered promiscuous mode
Dec  6 05:11:43 np0005548915 NetworkManager[48882]: <info>  [1765015903.9033] manager: (tapec2bc9a6-15): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.903 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00084|binding|INFO|Claiming lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for this chassis.
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00085|binding|INFO|ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b: Claiming fa:16:3e:64:9d:d4 10.100.0.14
Dec  6 05:11:43 np0005548915 systemd-udevd[271228]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:11:43 np0005548915 kernel: tapec2bc9a6-15 (unregistering): left promiscuous mode
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.915 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00086|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b ovn-installed in OVS
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00087|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b up in Southbound
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.926 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.930 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00088|binding|INFO|Releasing lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b from this chassis (sb_readonly=0)
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00089|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b down in Southbound
Dec  6 05:11:43 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : haproxy version is 2.8.14-c23fe91
Dec  6 05:11:43 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : path to executable is /usr/sbin/haproxy
Dec  6 05:11:43 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [WARNING]  (267815) : Exiting Master process...
Dec  6 05:11:43 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [WARNING]  (267815) : Exiting Master process...
Dec  6 05:11:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:11:43Z|00090|binding|INFO|Removing iface tapec2bc9a6-15 ovn-installed in OVS
Dec  6 05:11:43 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [ALERT]    (267815) : Current worker (267817) exited with code 143 (Terminated)
Dec  6 05:11:43 np0005548915 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [WARNING]  (267815) : All workers exited. Exiting... (0)
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.936 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.939 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:11:43 np0005548915 systemd[1]: libpod-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c.scope: Deactivated successfully.
Dec  6 05:11:43 np0005548915 podman[271262]: 2025-12-06 10:11:43.946133373 +0000 UTC m=+0.056472190 container died 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.954 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.957 254824 INFO nova.virt.libvirt.driver [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance destroyed successfully.#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.957 254824 DEBUG nova.objects.instance [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.975 254824 DEBUG nova.virt.libvirt.vif [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.976 254824 DEBUG nova.network.os_vif_util [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.977 254824 DEBUG nova.network.os_vif_util [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.978 254824 DEBUG os_vif [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.980 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c-userdata-shm.mount: Deactivated successfully.
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.980 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec2bc9a6-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:11:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b0c35a0a906865a4663842e5ed6b698da4d1040e57a2b60288990c137c9d3376-merged.mount: Deactivated successfully.
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.985 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.988 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:11:43 np0005548915 podman[271262]: 2025-12-06 10:11:43.991132546 +0000 UTC m=+0.101471363 container cleanup 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:11:43 np0005548915 nova_compute[254819]: 2025-12-06 10:11:43.991 254824 INFO os_vif [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15')#033[00m
Dec  6 05:11:44 np0005548915 systemd[1]: libpod-conmon-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c.scope: Deactivated successfully.
Dec  6 05:11:44 np0005548915 podman[271302]: 2025-12-06 10:11:44.072424228 +0000 UTC m=+0.054433106 container remove 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.087 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca67610-5b10-42c7-aeb6-b352b159fbaa]: (4, ('Sat Dec  6 10:11:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd (64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c)\n64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c\nSat Dec  6 10:11:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd (64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c)\n64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.089 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[63c0082e-72f9-4441-9306-145423ddf235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.090 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d76af3c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:11:44 np0005548915 kernel: tap4d76af3c-e0: left promiscuous mode
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.092 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.113 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.116 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b872c5e4-332e-4697-8c94-e4fc807be9f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.130 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea3dbdc-98f1-4c81-b74f-9002ec2e8609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.131 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab6844b-32d4-4e09-b2e0-212f1bed689a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.161 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[df22c8c2-9cce-41b2-9207-e9664a0adb9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419047, 'reachable_time': 15621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271328, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.164 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.164 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[4435c0b6-253d-4794-907d-d8f0b626f421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.165 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd unbound from our chassis#033[00m
Dec  6 05:11:44 np0005548915 systemd[1]: run-netns-ovnmeta\x2d4d76af3c\x2dede9\x2d445b\x2dbea0\x2dba96a2eaeddd.mount: Deactivated successfully.
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.166 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.167 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e4aeb864-2171-4369-8078-c22cfd32d552]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.168 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd unbound from our chassis#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.168 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:11:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.169 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7a8c08-cd30-4cd7-83a5-8237d834c15e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:11:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:11:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230595324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.233 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.243 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.264 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.267 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.268 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.427 254824 INFO nova.virt.libvirt.driver [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deleting instance files /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6_del#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.428 254824 INFO nova.virt.libvirt.driver [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deletion of /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6_del complete#033[00m
Dec  6 05:11:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 3.0 KiB/s wr, 19 op/s
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.504 254824 INFO nova.compute.manager [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.505 254824 DEBUG oslo.service.loopingcall [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.505 254824 DEBUG nova.compute.manager [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:11:44 np0005548915 nova_compute[254819]: 2025-12-06 10:11:44.505 254824 DEBUG nova.network.neutron [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:11:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:45.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.270 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.271 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.272 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.455 254824 DEBUG nova.network.neutron [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.480 254824 INFO nova.compute.manager [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 0.98 seconds to deallocate network for instance.#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.488 254824 DEBUG nova.network.neutron [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.489 254824 DEBUG nova.network.neutron [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.540 254824 DEBUG oslo_concurrency.lockutils [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.551 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.552 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.565 254824 DEBUG nova.compute.manager [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-deleted-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.566 254824 INFO nova.compute.manager [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Neutron deleted interface ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.566 254824 DEBUG nova.network.neutron [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.596 254824 DEBUG nova.compute.manager [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Detach interface failed, port_id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b, reason: Instance 467f8e9a-e166-409e-920c-689fea4ea3f6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.609 254824 DEBUG oslo_concurrency.processutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.696 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.697 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.698 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.698 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.698 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.699 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.699 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.700 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.700 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.700 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.701 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.701 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.701 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.702 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.702 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.702 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.703 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.703 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.704 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.704 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.704 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.705 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.705 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.705 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.706 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.706 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.706 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.707 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.707 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.708 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:11:45 np0005548915 nova_compute[254819]: 2025-12-06 10:11:45.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.093 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:11:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501168822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.156 254824 DEBUG oslo_concurrency.processutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.165 254824 DEBUG nova.compute.provider_tree [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:11:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:11:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014585911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.185 254824 DEBUG nova.scheduler.client.report [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:11:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:11:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014585911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.226 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.261 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.262 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.262 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.262 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.282 254824 INFO nova.scheduler.client.report [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 467f8e9a-e166-409e-920c-689fea4ea3f6#033[00m
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.373 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.6 KiB/s wr, 16 op/s
Dec  6 05:11:46 np0005548915 nova_compute[254819]: 2025-12-06 10:11:46.469 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:11:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.610 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.627 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.628 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.628 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.628 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec  6 05:11:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.003000079s ======
Dec  6 05:11:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec  6 05:11:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:47.636Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.004000106s ======
Dec  6 05:11:47 np0005548915 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000106s
Dec  6 05:11:47 np0005548915 podman[271357]: 2025-12-06 10:11:47.706651817 +0000 UTC m=+0.048661481 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG nova.compute.manager [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG oslo_concurrency.lockutils [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG oslo_concurrency.lockutils [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG oslo_concurrency.lockutils [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.848 254824 DEBUG nova.compute.manager [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:11:47 np0005548915 nova_compute[254819]: 2025-12-06 10:11:47.848 254824 WARNING nova.compute.manager [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:11:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.7 KiB/s wr, 44 op/s
Dec  6 05:11:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:48 np0005548915 nova_compute[254819]: 2025-12-06 10:11:48.656 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:48 np0005548915 nova_compute[254819]: 2025-12-06 10:11:48.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:11:48 np0005548915 nova_compute[254819]: 2025-12-06 10:11:48.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:11:48 np0005548915 nova_compute[254819]: 2025-12-06 10:11:48.985 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:49.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec  6 05:11:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:49 np0005548915 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  6 05:11:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  6 05:11:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  6 05:11:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:51 np0005548915 nova_compute[254819]: 2025-12-06 10:11:51.094 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:51 np0005548915 nova_compute[254819]: 2025-12-06 10:11:51.272 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:51 np0005548915 nova_compute[254819]: 2025-12-06 10:11:51.377 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec  6 05:11:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:51.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:51 np0005548915 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  6 05:11:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:11:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:53.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:11:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:11:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:11:53 np0005548915 nova_compute[254819]: 2025-12-06 10:11:53.988 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:11:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:11:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:11:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:11:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:11:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:11:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:11:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:11:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:11:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:11:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  6 05:11:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:11:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:55.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:56 np0005548915 nova_compute[254819]: 2025-12-06 10:11:56.127 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  6 05:11:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:57.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:57.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  6 05:11:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:58 np0005548915 nova_compute[254819]: 2025-12-06 10:11:58.944 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015903.9427285, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:11:58 np0005548915 nova_compute[254819]: 2025-12-06 10:11:58.945 254824 INFO nova.compute.manager [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:11:58 np0005548915 nova_compute[254819]: 2025-12-06 10:11:58.971 254824 DEBUG nova.compute.manager [None req-40ed7fed-b130-48b0-af51-6aaa006778d1 - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:11:58 np0005548915 nova_compute[254819]: 2025-12-06 10:11:58.989 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:11:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:59.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:11:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:11:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:11:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:59.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:11:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:11:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:11:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:11:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:12:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec  6 05:12:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:12:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:12:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:01 np0005548915 nova_compute[254819]: 2025-12-06 10:12:01.129 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:01.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:01.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:12:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:03.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:03 np0005548915 nova_compute[254819]: 2025-12-06 10:12:03.991 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec  6 05:12:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:12:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:05.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:12:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:05.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:06 np0005548915 nova_compute[254819]: 2025-12-06 10:12:06.132 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec  6 05:12:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101207 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:12:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:07 np0005548915 podman[271429]: 2025-12-06 10:12:07.437125669 +0000 UTC m=+0.062777359 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 05:12:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:07.639Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:07.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:07.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:07 np0005548915 nova_compute[254819]: 2025-12-06 10:12:07.865 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:07 np0005548915 nova_compute[254819]: 2025-12-06 10:12:07.865 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:07 np0005548915 nova_compute[254819]: 2025-12-06 10:12:07.888 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:12:07 np0005548915 nova_compute[254819]: 2025-12-06 10:12:07.958 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:07 np0005548915 nova_compute[254819]: 2025-12-06 10:12:07.959 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:07 np0005548915 nova_compute[254819]: 2025-12-06 10:12:07.966 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:12:07 np0005548915 nova_compute[254819]: 2025-12-06 10:12:07.967 254824 INFO nova.compute.claims [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.074 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec  6 05:12:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:12:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2628881591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.531 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.538 254824 DEBUG nova.compute.provider_tree [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.557 254824 DEBUG nova.scheduler.client.report [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.588 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.589 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:12:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.667 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.668 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.686 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:12:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.704 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.800 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.802 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.803 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Creating image(s)#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.835 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.866 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.891 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.895 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.961 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.962 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.963 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.964 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:12:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.990 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:08 np0005548915 nova_compute[254819]: 2025-12-06 10:12:08.994 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:09.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.020 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.272 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.367 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.494 254824 DEBUG nova.objects.instance [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.517 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.518 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Ensure instance console log exists: /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.519 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.519 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.520 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:09 np0005548915 nova_compute[254819]: 2025-12-06 10:12:09.529 254824 DEBUG nova.policy [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:12:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:09.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:09.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec  6 05:12:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:12:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:12:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.135 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:11.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:11.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.703 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Successfully updated port: 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.719 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.719 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.720 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.824 254824 DEBUG nova.compute.manager [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.825 254824 DEBUG nova.compute.manager [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing instance network info cache due to event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:12:11 np0005548915 nova_compute[254819]: 2025-12-06 10:12:11.825 254824 DEBUG oslo_concurrency.lockutils [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:12:12 np0005548915 nova_compute[254819]: 2025-12-06 10:12:12.294 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:12:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec  6 05:12:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:13 np0005548915 podman[271668]: 2025-12-06 10:12:13.337432292 +0000 UTC m=+0.122420802 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.545 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance network_info: |[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG oslo_concurrency.lockutils [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG nova.network.neutron [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.570 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start _get_guest_xml network_info=[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.575 254824 WARNING nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.588 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.589 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.594 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.595 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.595 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.595 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.596 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.596 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.596 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.597 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.597 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.597 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.598 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.598 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.598 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.599 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:12:13 np0005548915 nova_compute[254819]: 2025-12-06 10:12:13.603 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:13.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:12:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421545061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.022 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.032 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.060 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.064 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 1.8 MiB/s wr, 183 op/s
Dec  6 05:12:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:12:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1184900673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.583 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.585 254824 DEBUG nova.virt.libvirt.vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-284526286',display_name='tempest-TestNetworkBasicOps-server-284526286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-284526286',id=8,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5WKiUV8xkMkAsnSbmzedlPzfsh0aXQ19j5QoS/ZDmv+Vks7yaRYH6rFdpbJ+HzL9PhlMkojs6PG37wLmd0XymAGnK31KjajjkwaxDm0frZ4gN7dvsIumy7dBgoLu6Aiw==',key_name='tempest-TestNetworkBasicOps-1751669676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-4hshqkm6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:08Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.585 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.586 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.588 254824 DEBUG nova.objects.instance [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:12:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.652 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <uuid>38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971</uuid>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <name>instance-00000008</name>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-284526286</nova:name>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:12:13</nova:creationTime>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <nova:port uuid="4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <entry name="serial">38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971</entry>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <entry name="uuid">38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971</entry>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:6f:25:fa"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <target dev="tap4c8ce68f-8a"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/console.log" append="off"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:12:14 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:12:14 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:12:14 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:12:14 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.653 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Preparing to wait for external event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.654 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.654 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.654 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.655 254824 DEBUG nova.virt.libvirt.vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-284526286',display_name='tempest-TestNetworkBasicOps-server-284526286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-284526286',id=8,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5WKiUV8xkMkAsnSbmzedlPzfsh0aXQ19j5QoS/ZDmv+Vks7yaRYH6rFdpbJ+HzL9PhlMkojs6PG37wLmd0XymAGnK31KjajjkwaxDm0frZ4gN7dvsIumy7dBgoLu6Aiw==',key_name='tempest-TestNetworkBasicOps-1751669676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-4hshqkm6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:08Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.655 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.656 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.656 254824 DEBUG os_vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.657 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.658 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.658 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.660 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.661 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c8ce68f-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.661 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c8ce68f-8a, col_values=(('external_ids', {'iface-id': '4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:25:fa', 'vm-uuid': '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.663 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:14 np0005548915 NetworkManager[48882]: <info>  [1765015934.6644] manager: (tap4c8ce68f-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.666 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.673 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.675 254824 INFO os_vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')#033[00m
Dec  6 05:12:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.730 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.730 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.730 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6f:25:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.731 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Using config drive#033[00m
Dec  6 05:12:14 np0005548915 nova_compute[254819]: 2025-12-06 10:12:14.764 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608002030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.125 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Creating config drive at /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.134 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp99hdhhok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.270 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp99hdhhok" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.317 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.323 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.342 254824 DEBUG nova.network.neutron [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updated VIF entry in instance network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.343 254824 DEBUG nova.network.neutron [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.367 254824 DEBUG oslo_concurrency.lockutils [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.489 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.490 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deleting local config drive /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config because it was imported into RBD.#033[00m
Dec  6 05:12:15 np0005548915 kernel: tap4c8ce68f-8a: entered promiscuous mode
Dec  6 05:12:15 np0005548915 NetworkManager[48882]: <info>  [1765015935.5435] manager: (tap4c8ce68f-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.544 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:15 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:15Z|00091|binding|INFO|Claiming lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for this chassis.
Dec  6 05:12:15 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:15Z|00092|binding|INFO|4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e: Claiming fa:16:3e:6f:25:fa 10.100.0.9
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.552 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.556 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.563 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.564 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 bound to our chassis#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.565 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2ce21d9-e711-470f-89f6-0db58ded70b9#033[00m
Dec  6 05:12:15 np0005548915 systemd-udevd[271832]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.580 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[66e4b8b3-c4b5-4f04-857d-2e507f53e082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.581 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2ce21d9-e1 in ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:12:15 np0005548915 systemd-machined[216202]: New machine qemu-5-instance-00000008.
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.584 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2ce21d9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.584 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2266fc6d-ec4e-4c4a-a2be-1c19054f4676]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.585 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1987f26b-56df-4499-b2ca-0548f19f513e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 NetworkManager[48882]: <info>  [1765015935.5890] device (tap4c8ce68f-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:12:15 np0005548915 NetworkManager[48882]: <info>  [1765015935.5902] device (tap4c8ce68f-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.598 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[9e153dca-6d72-4286-8cf1-889391f90fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 systemd[1]: Started Virtual Machine qemu-5-instance-00000008.
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.630 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4f94dbe1-c335-4326-9fa7-418f39ea4cdb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.641 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:15 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:15Z|00093|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e ovn-installed in OVS
Dec  6 05:12:15 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:15Z|00094|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e up in Southbound
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.647 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:15.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.667 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe7e475-d870-4391-847c-b37e7bbf348b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.673 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[77e04807-2608-46d9-80e4-d015d27e2974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 systemd-udevd[271835]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:12:15 np0005548915 NetworkManager[48882]: <info>  [1765015935.6755] manager: (tapc2ce21d9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Dec  6 05:12:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.706 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb7682a-cc8a-4892-a74f-b11382759a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.709 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[39088c1e-b4ec-4144-910b-6818ef8fb60a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 NetworkManager[48882]: <info>  [1765015935.7287] device (tapc2ce21d9-e0): carrier: link connected
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.734 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[13b343fc-aef4-4916-8bff-2d1147986895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.752 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8d05b2a7-d7fa-43a2-8c6c-efde55e15fd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430621, 'reachable_time': 21540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271867, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.766 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0237e422-6e19-4289-a5a2-a6dd70db8272]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:5864'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430621, 'tstamp': 430621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271868, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.786 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[24750187-e1f5-481e-a03f-867a53145d86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430621, 'reachable_time': 21540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271869, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.815 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2a60eae7-c67d-428a-b226-1d9d184e03b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.881 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f11432ba-74d5-400a-bd20-196274539ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.883 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.883 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.883 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2ce21d9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 05:12:15 np0005548915 NetworkManager[48882]: <info>  [1765015935.8868] manager: (tapc2ce21d9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec  6 05:12:15 np0005548915 kernel: tapc2ce21d9-e0: entered promiscuous mode
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.890 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2ce21d9-e0, col_values=(('external_ids', {'iface-id': '52d33d15-d96f-4c26-a63e-0415fca27e6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 05:12:15 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:15Z|00095|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.892 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.914 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.915 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.916 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8f05fd07-d5c7-4fb1-b5ea-4e2fdfdf43d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.917 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  6 05:12:15 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.917 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'env', 'PROCESS_TAG=haproxy-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2ce21d9-e711-470f-89f6-0db58ded70b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.953 254824 DEBUG nova.compute.manager [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.953 254824 DEBUG oslo_concurrency.lockutils [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.953 254824 DEBUG oslo_concurrency.lockutils [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.954 254824 DEBUG oslo_concurrency.lockutils [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:12:15 np0005548915 nova_compute[254819]: 2025-12-06 10:12:15.954 254824 DEBUG nova.compute.manager [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Processing event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.137 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.221 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015936.2212744, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.222 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Started (Lifecycle Event)
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.224 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.227 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.230 254824 INFO nova.virt.libvirt.driver [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance spawned successfully.
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.230 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.258 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.262 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.262 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.263 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.263 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.263 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.264 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.267 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.308 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.308 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015936.2214634, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.309 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Paused (Lifecycle Event)
Dec  6 05:12:16 np0005548915 podman[271943]: 2025-12-06 10:12:16.317760144 +0000 UTC m=+0.056574812 container create 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.346 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.351 254824 INFO nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 7.55 seconds to spawn the instance on the hypervisor.
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.351 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.355 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015936.2263675, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.356 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Resumed (Lifecycle Event)
Dec  6 05:12:16 np0005548915 systemd[1]: Started libpod-conmon-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24.scope.
Dec  6 05:12:16 np0005548915 podman[271943]: 2025-12-06 10:12:16.286990812 +0000 UTC m=+0.025805520 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:12:16 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d1a25401d5ac4822de4fb50bc3620447da04f31525bd103aac8567c3654c9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:16 np0005548915 podman[271943]: 2025-12-06 10:12:16.416211935 +0000 UTC m=+0.155026623 container init 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.423 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 05:12:16 np0005548915 podman[271943]: 2025-12-06 10:12:16.42346043 +0000 UTC m=+0.162275098 container start 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.427 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 05:12:16 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : New worker (271964) forked
Dec  6 05:12:16 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : Loading success.
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.456 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 05:12:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:12:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.693 254824 INFO nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 8.76 seconds to build instance.
Dec  6 05:12:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:16 np0005548915 nova_compute[254819]: 2025-12-06 10:12:16.725 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:12:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:17.641Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:12:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:17.642Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:17.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:18 np0005548915 nova_compute[254819]: 2025-12-06 10:12:18.055 254824 DEBUG nova.compute.manager [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 05:12:18 np0005548915 nova_compute[254819]: 2025-12-06 10:12:18.056 254824 DEBUG oslo_concurrency.lockutils [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:12:18 np0005548915 nova_compute[254819]: 2025-12-06 10:12:18.056 254824 DEBUG oslo_concurrency.lockutils [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:12:18 np0005548915 nova_compute[254819]: 2025-12-06 10:12:18.057 254824 DEBUG oslo_concurrency.lockutils [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:12:18 np0005548915 nova_compute[254819]: 2025-12-06 10:12:18.057 254824 DEBUG nova.compute.manager [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 05:12:18 np0005548915 nova_compute[254819]: 2025-12-06 10:12:18.058 254824 WARNING nova.compute.manager [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state active and task_state None.
Dec  6 05:12:18 np0005548915 podman[271975]: 2025-12-06 10:12:18.428531428 +0000 UTC m=+0.056596514 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  6 05:12:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  6 05:12:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:19.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:19 np0005548915 nova_compute[254819]: 2025-12-06 10:12:19.664 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:12:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:12:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  6 05:12:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:12:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:12:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:21 np0005548915 nova_compute[254819]: 2025-12-06 10:12:21.139 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:21 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:21Z|00096|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec  6 05:12:21 np0005548915 nova_compute[254819]: 2025-12-06 10:12:21.371 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:21 np0005548915 NetworkManager[48882]: <info>  [1765015941.3755] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec  6 05:12:21 np0005548915 NetworkManager[48882]: <info>  [1765015941.3765] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec  6 05:12:21 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:21Z|00097|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec  6 05:12:21 np0005548915 nova_compute[254819]: 2025-12-06 10:12:21.438 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:21 np0005548915 nova_compute[254819]: 2025-12-06 10:12:21.447 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:21.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  6 05:12:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.703 254824 DEBUG nova.compute.manager [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG nova.compute.manager [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing instance network info cache due to event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG oslo_concurrency.lockutils [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG oslo_concurrency.lockutils [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG nova.network.neutron [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:12:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.918 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.918 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.919 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.919 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.919 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.920 254824 INFO nova.compute.manager [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Terminating instance#033[00m
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.922 254824 DEBUG nova.compute.manager [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:12:22 np0005548915 kernel: tap4c8ce68f-8a (unregistering): left promiscuous mode
Dec  6 05:12:22 np0005548915 NetworkManager[48882]: <info>  [1765015942.9617] device (tap4c8ce68f-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.975 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:22 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:22Z|00098|binding|INFO|Releasing lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e from this chassis (sb_readonly=0)
Dec  6 05:12:22 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:22Z|00099|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e down in Southbound
Dec  6 05:12:22 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:22Z|00100|binding|INFO|Removing iface tap4c8ce68f-8a ovn-installed in OVS
Dec  6 05:12:22 np0005548915 nova_compute[254819]: 2025-12-06 10:12:22.978 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:22 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.987 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:12:22 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.988 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 unbound from our chassis#033[00m
Dec  6 05:12:22 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.990 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2ce21d9-e711-470f-89f6-0db58ded70b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:12:22 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.991 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf214a5-8e2f-49a7-83c0-d03e22d810f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:22 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.991 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace which is not needed anymore#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.004 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:23 np0005548915 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  6 05:12:23 np0005548915 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 7.556s CPU time.
Dec  6 05:12:23 np0005548915 systemd-machined[216202]: Machine qemu-5-instance-00000008 terminated.
Dec  6 05:12:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:23 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : haproxy version is 2.8.14-c23fe91
Dec  6 05:12:23 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : path to executable is /usr/sbin/haproxy
Dec  6 05:12:23 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [WARNING]  (271962) : Exiting Master process...
Dec  6 05:12:23 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [WARNING]  (271962) : Exiting Master process...
Dec  6 05:12:23 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [ALERT]    (271962) : Current worker (271964) exited with code 143 (Terminated)
Dec  6 05:12:23 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [WARNING]  (271962) : All workers exited. Exiting... (0)
Dec  6 05:12:23 np0005548915 systemd[1]: libpod-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24.scope: Deactivated successfully.
Dec  6 05:12:23 np0005548915 podman[272024]: 2025-12-06 10:12:23.130858233 +0000 UTC m=+0.044018557 container died 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.155 254824 INFO nova.virt.libvirt.driver [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance destroyed successfully.#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.156 254824 DEBUG nova.objects.instance [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:12:23 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24-userdata-shm.mount: Deactivated successfully.
Dec  6 05:12:23 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d6d1a25401d5ac4822de4fb50bc3620447da04f31525bd103aac8567c3654c9e-merged.mount: Deactivated successfully.
Dec  6 05:12:23 np0005548915 podman[272024]: 2025-12-06 10:12:23.175037324 +0000 UTC m=+0.088197658 container cleanup 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.182 254824 DEBUG nova.virt.libvirt.vif [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-284526286',display_name='tempest-TestNetworkBasicOps-server-284526286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-284526286',id=8,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5WKiUV8xkMkAsnSbmzedlPzfsh0aXQ19j5QoS/ZDmv+Vks7yaRYH6rFdpbJ+HzL9PhlMkojs6PG37wLmd0XymAGnK31KjajjkwaxDm0frZ4gN7dvsIumy7dBgoLu6Aiw==',key_name='tempest-TestNetworkBasicOps-1751669676',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-4hshqkm6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:12:16Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.182 254824 DEBUG nova.network.os_vif_util [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.183 254824 DEBUG nova.network.os_vif_util [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.183 254824 DEBUG os_vif [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.185 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.185 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c8ce68f-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.187 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.188 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.191 254824 INFO os_vif [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')#033[00m
Dec  6 05:12:23 np0005548915 systemd[1]: libpod-conmon-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24.scope: Deactivated successfully.
Dec  6 05:12:23 np0005548915 podman[272063]: 2025-12-06 10:12:23.244372217 +0000 UTC m=+0.042857276 container remove 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.251 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[62e5d06d-739a-4886-a56d-9c38e551312d]: (4, ('Sat Dec  6 10:12:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24)\n97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24\nSat Dec  6 10:12:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24)\n97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.253 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[05c73cea-9d36-4475-b186-bb73f6f1b33d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.254 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:23 np0005548915 kernel: tapc2ce21d9-e0: left promiscuous mode
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.256 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.268 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.272 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b87d7559-5820-49f5-8dfc-d1473cba12d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.292 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cddc806a-ec8d-41dd-b2fe-fe4de853bf4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.295 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3b97e147-07e4-4f85-b9c1-2b1f88844b3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.308 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[156285ea-ecbd-46c0-ac8d-51b1eaec11b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430614, 'reachable_time': 25478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272096, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.311 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:12:23 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.311 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[d053615f-5fdb-4089-9f15-4df63481fb7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:23 np0005548915 systemd[1]: run-netns-ovnmeta\x2dc2ce21d9\x2de711\x2d470f\x2d89f6\x2d0db58ded70b9.mount: Deactivated successfully.
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.549 254824 INFO nova.virt.libvirt.driver [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deleting instance files /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_del#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.551 254824 INFO nova.virt.libvirt.driver [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deletion of /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_del complete#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.610 254824 INFO nova.compute.manager [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.611 254824 DEBUG oslo.service.loopingcall [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.611 254824 DEBUG nova.compute.manager [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:12:23 np0005548915 nova_compute[254819]: 2025-12-06 10:12:23.611 254824 DEBUG nova.network.neutron [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:12:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:23.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:23.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:12:23
Dec  6 05:12:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:12:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:12:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.nfs', 'default.rgw.meta', 'vms', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.mgr']
Dec  6 05:12:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:12:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:12:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.511 254824 DEBUG nova.network.neutron [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updated VIF entry in instance network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.512 254824 DEBUG nova.network.neutron [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:12:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.540 254824 DEBUG oslo_concurrency.lockutils [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:12:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.823 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.824 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.824 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.824 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.825 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] No waiting events found dispatching network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.825 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.825 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.826 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.826 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.826 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.827 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:12:24 np0005548915 nova_compute[254819]: 2025-12-06 10:12:24.827 254824 WARNING nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state active and task_state deleting.#033[00m
Dec  6 05:12:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:25 np0005548915 nova_compute[254819]: 2025-12-06 10:12:25.488 254824 DEBUG nova.network.neutron [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:12:25 np0005548915 nova_compute[254819]: 2025-12-06 10:12:25.507 254824 INFO nova.compute.manager [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 1.90 seconds to deallocate network for instance.#033[00m
Dec  6 05:12:25 np0005548915 nova_compute[254819]: 2025-12-06 10:12:25.556 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:25 np0005548915 nova_compute[254819]: 2025-12-06 10:12:25.556 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:25 np0005548915 nova_compute[254819]: 2025-12-06 10:12:25.620 254824 DEBUG oslo_concurrency.processutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:12:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:25.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:12:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:25.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:12:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/453190227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:12:26 np0005548915 nova_compute[254819]: 2025-12-06 10:12:26.074 254824 DEBUG oslo_concurrency.processutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:26 np0005548915 nova_compute[254819]: 2025-12-06 10:12:26.081 254824 DEBUG nova.compute.provider_tree [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:12:26 np0005548915 nova_compute[254819]: 2025-12-06 10:12:26.101 254824 DEBUG nova.scheduler.client.report [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:12:26 np0005548915 nova_compute[254819]: 2025-12-06 10:12:26.142 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:26 np0005548915 nova_compute[254819]: 2025-12-06 10:12:26.145 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:26 np0005548915 nova_compute[254819]: 2025-12-06 10:12:26.173 254824 INFO nova.scheduler.client.report [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971#033[00m
Dec  6 05:12:26 np0005548915 nova_compute[254819]: 2025-12-06 10:12:26.260 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec  6 05:12:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:27.643Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:27.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:27.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:28 np0005548915 nova_compute[254819]: 2025-12-06 10:12:28.190 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 103 op/s
Dec  6 05:12:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:29.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101229 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:12:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:29.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:29.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec  6 05:12:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:12:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:12:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:31 np0005548915 nova_compute[254819]: 2025-12-06 10:12:31.144 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:31.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:31.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec  6 05:12:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00012c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:33 np0005548915 nova_compute[254819]: 2025-12-06 10:12:33.193 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:33.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:33.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.597 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.598 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.622 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:12:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.736 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.736 254824 INFO nova.compute.claims [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:12:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:34 np0005548915 nova_compute[254819]: 2025-12-06 10:12:34.835 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:12:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1575946514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.282 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.289 254824 DEBUG nova.compute.provider_tree [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.306 254824 DEBUG nova.scheduler.client.report [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.342 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.344 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.428 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.428 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.451 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.473 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.598 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.600 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.601 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Creating image(s)#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.639 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.677 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:12:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:35.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:12:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:35.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.710 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.715 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.746 254824 DEBUG nova.policy [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.804 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.805 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.806 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.806 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.839 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:35 np0005548915 nova_compute[254819]: 2025-12-06 10:12:35.844 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 588b3b1f-9845-438c-89c4-744f95204b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.134 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 588b3b1f-9845-438c-89c4-744f95204b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.176 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.229 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.368 254824 DEBUG nova.objects.instance [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 588b3b1f-9845-438c-89c4-744f95204b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.384 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.384 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Ensure instance console log exists: /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.385 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.385 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:36 np0005548915 nova_compute[254819]: 2025-12-06 10:12:36.385 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec  6 05:12:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.596 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Successfully updated port: 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.616 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.616 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.616 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:12:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:37.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:12:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.708 254824 DEBUG nova.compute.manager [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.709 254824 DEBUG nova.compute.manager [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Refreshing instance network info cache due to event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.709 254824 DEBUG oslo_concurrency.lockutils [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:12:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:37.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:37 np0005548915 nova_compute[254819]: 2025-12-06 10:12:37.759 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.154 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015943.1529648, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.155 254824 INFO nova.compute.manager [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.187 254824 DEBUG nova.compute.manager [None req-d52641f1-c4f0-4c75-bde4-5e021ca08454 - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.197 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:38 np0005548915 podman[272352]: 2025-12-06 10:12:38.454443433 +0000 UTC m=+0.081524840 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec  6 05:12:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec  6 05:12:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.771 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.789 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.789 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance network_info: |[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.791 254824 DEBUG oslo_concurrency.lockutils [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.791 254824 DEBUG nova.network.neutron [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Refreshing network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.795 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start _get_guest_xml network_info=[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.800 254824 WARNING nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.807 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.808 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.816 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.816 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.817 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.817 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.818 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.818 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.818 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.819 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.819 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.819 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:12:38 np0005548915 nova_compute[254819]: 2025-12-06 10:12:38.825 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:12:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:12:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:39.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:12:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480538889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.340 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.373 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.377 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:39.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
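The `beast:` lines are radosgw's access log (here: anonymous `HEAD /` health probes from the load balancer IPs). A sketch of splitting one such line into fields; the regex is fitted to the lines above, not a documented format guarantee:

```python
import re

# Field names are mine; the "- - -" run covers fields unused by HEAD probes.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
        '[06/Dec/2025:10:12:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000026s')

m = BEAST_RE.search(line)
```

Sub-millisecond latencies on these probes indicate the gateway itself is healthy while the alertmanager webhook above is timing out, which points the problem at the dashboard receiver rather than RGW.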
Dec  6 05:12:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:12:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169413128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:12:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
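The `_set_new_cache_sizes` line shows the monitor rebalancing one cache budget (`cache_size`) into per-consumer allocations (`inc_alloc`, `full_alloc`, `kv_alloc`) that sum to just under the budget because each share is chunk-aligned. A sketch of that general shape; the equal weights and 4 MiB chunk here are illustrative, not Ceph's actual tuning:

```python
def split_cache(total: int, weights: dict, chunk: int = 4 * 2**20):
    """Divide a cache budget by weight, rounding each share down to a
    chunk boundary (illustrative sketch of a priority-cache rebalance)."""
    wsum = sum(weights.values())
    return {k: (total * w // wsum) // chunk * chunk for k, w in weights.items()}

# The budget from the log line above, split three ways.
alloc = split_cache(1020054731, {"inc": 1, "full": 1, "kv": 1})
```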
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.826 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.828 254824 DEBUG nova.virt.libvirt.vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1549098257',display_name='tempest-TestNetworkBasicOps-server-1549098257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1549098257',id=9,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjlfXiWeP25/+Al9avXS7k5sTY7UpSTwvIPTlqQIhh0XClSeVPzmFV420fI5WFwr8qS2zHe5RQB0WDD7hpreK+FV5EzKAwwCW1d4oQG8NLOPL6t68qoP/9Hs+y9Im3qyA==',key_name='tempest-TestNetworkBasicOps-1342068066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-8kktnhof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:35Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=588b3b1f-9845-438c-89c4-744f95204b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.828 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.829 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.830 254824 DEBUG nova.objects.instance [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 588b3b1f-9845-438c-89c4-744f95204b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.847 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <uuid>588b3b1f-9845-438c-89c4-744f95204b42</uuid>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <name>instance-00000009</name>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-1549098257</nova:name>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:12:38</nova:creationTime>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <nova:port uuid="4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <entry name="serial">588b3b1f-9845-438c-89c4-744f95204b42</entry>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <entry name="uuid">588b3b1f-9845-438c-89c4-744f95204b42</entry>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/588b3b1f-9845-438c-89c4-744f95204b42_disk">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/588b3b1f-9845-438c-89c4-744f95204b42_disk.config">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:6f:25:fa"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <target dev="tap4c8ce68f-8a"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/console.log" append="off"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:12:39 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:12:39 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:12:39 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:12:39 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
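The guest XML dumped above can be post-processed with stdlib tooling. As a sketch, this pulls the RBD image name and monitor list for each network disk from a hand-trimmed copy of the `<devices>` section (auth and non-disk devices dropped for brevity):

```python
import xml.etree.ElementTree as ET

# Trimmed from the _get_guest_xml dump above.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/588b3b1f-9845-438c-89c4-744f95204b42_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/588b3b1f-9845-438c-89c4-744f95204b42_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def rbd_disks(xml_text: str):
    """Map each target device to its RBD image and monitor endpoints."""
    out = {}
    for disk in ET.fromstring(xml_text).findall("./devices/disk"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue
        out[disk.find("target").get("dev")] = {
            "image": src.get("name"),
            "mons": [(h.get("name"), h.get("port")) for h in src.findall("host")],
        }
    return out

disks = rbd_disks(domain_xml)
```

Note the second disk is the `_disk.config` config-drive image whose absence was checked (and logged) just before the XML was built.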
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.847 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Preparing to wait for external event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG nova.virt.libvirt.vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1549098257',display_name='tempest-TestNetworkBasicOps-server-1549098257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1549098257',id=9,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjlfXiWeP25/+Al9avXS7k5sTY7UpSTwvIPTlqQIhh0XClSeVPzmFV420fI5WFwr8qS2zHe5RQB0WDD7hpreK+FV5EzKAwwCW1d4oQG8NLOPL6t68qoP/9Hs+y9Im3qyA==',key_name='tempest-TestNetworkBasicOps-1342068066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-8kktnhof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:35Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=588b3b1f-9845-438c-89c4-744f95204b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.849 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.849 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.849 254824 DEBUG os_vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.850 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.850 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.851 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.853 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.853 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c8ce68f-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.854 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c8ce68f-8a, col_values=(('external_ids', {'iface-id': '4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:25:fa', 'vm-uuid': '588b3b1f-9845-438c-89c4-744f95204b42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
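Two conventions are visible in that transaction: the tap device name is the port UUID clipped to fit the kernel's interface-name limit (hence `tap4c8ce68f-8a`), and the `Interface` row carries `external_ids` that the OVN controller matches against the logical switch port. A sketch of both, with field names taken from the DbSetCommand above (the 14-character budget is nova's convention as I understand it, stated here as an assumption):

```python
def tap_name(port_id: str) -> str:
    """Derive the tap device name: "tap" + port UUID, truncated to 14
    characters to stay under the kernel interface-name limit."""
    return ("tap" + port_id)[:14]

def iface_external_ids(port_id: str, mac: str, vm_uuid: str) -> dict:
    """The external_ids written on the OVS Interface record; OVN binds
    the port by matching iface-id to its logical switch port name."""
    return {
        "iface-id": port_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }

ids = iface_external_ids(
    "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e",
    "fa:16:3e:6f:25:fa",
    "588b3b1f-9845-438c-89c4-744f95204b42",
)
```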
Dec  6 05:12:39 np0005548915 NetworkManager[48882]: <info>  [1765015959.9038] manager: (tap4c8ce68f-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.903 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.906 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.911 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.912 254824 INFO os_vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')#033[00m
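The ordering above matters: compute registers interest in `network-vif-plugged-<port_id>` *before* plugging the VIF, so a fast neutron notification cannot be lost between plug and wait. A simplified sketch of that prepare/deliver/wait flow (not nova's actual implementation, which lives in nova.compute.manager):

```python
import threading

class InstanceEvents:
    """Register interest in a named event, then block until delivery."""

    def __init__(self):
        self._events = {}
        self._lock = threading.Lock()  # same role as the "<uuid>-events" lock

    def _event(self, name: str) -> threading.Event:
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def prepare_for_instance_event(self, name: str) -> threading.Event:
        return self._event(name)

    def deliver(self, name: str) -> None:
        self._event(name).set()

    def wait_for(self, name: str, timeout: float) -> bool:
        return self._event(name).wait(timeout)

events = InstanceEvents()
events.prepare_for_instance_event("network-vif-plugged-4c8ce68f")
# ...plug the VIF; neutron later reports the plug:
events.deliver("network-vif-plugged-4c8ce68f")
```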
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.973 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.974 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.974 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6f:25:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:12:39 np0005548915 nova_compute[254819]: 2025-12-06 10:12:39.974 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Using config drive#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.010 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:12:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:12:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.324 254824 DEBUG nova.network.neutron [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updated VIF entry in instance network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.325 254824 DEBUG nova.network.neutron [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.343 254824 DEBUG oslo_concurrency.lockutils [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.464 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Creating config drive at /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.471 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tyko1dy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.595 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tyko1dy" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.624 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:12:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.628 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config 588b3b1f-9845-438c-89c4-744f95204b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:40 np0005548915 podman[272668]: 2025-12-06 10:12:40.752034568 +0000 UTC m=+0.041597261 container create 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.784 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config 588b3b1f-9845-438c-89c4-744f95204b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.785 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deleting local config drive /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config because it was imported into RBD.#033[00m
Dec  6 05:12:40 np0005548915 systemd[1]: Started libpod-conmon-4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e.scope.
Dec  6 05:12:40 np0005548915 podman[272668]: 2025-12-06 10:12:40.734138721 +0000 UTC m=+0.023701434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:12:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:40 np0005548915 kernel: tap4c8ce68f-8a: entered promiscuous mode
Dec  6 05:12:40 np0005548915 NetworkManager[48882]: <info>  [1765015960.8434] manager: (tap4c8ce68f-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec  6 05:12:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:40Z|00101|binding|INFO|Claiming lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for this chassis.
Dec  6 05:12:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:40Z|00102|binding|INFO|4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e: Claiming fa:16:3e:6f:25:fa 10.100.0.9
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.846 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:40 np0005548915 podman[272668]: 2025-12-06 10:12:40.853517641 +0000 UTC m=+0.143080354 container init 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.855 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '588b3b1f-9845-438c-89c4-744f95204b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.856 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 bound to our chassis#033[00m
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.857 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2ce21d9-e711-470f-89f6-0db58ded70b9#033[00m
Dec  6 05:12:40 np0005548915 podman[272668]: 2025-12-06 10:12:40.865041729 +0000 UTC m=+0.154604422 container start 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:12:40 np0005548915 podman[272668]: 2025-12-06 10:12:40.869754996 +0000 UTC m=+0.159317689 container attach 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:12:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:40Z|00103|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e ovn-installed in OVS
Dec  6 05:12:40 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:40Z|00104|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e up in Southbound
Dec  6 05:12:40 np0005548915 lucid_hawking[272687]: 167 167
Dec  6 05:12:40 np0005548915 systemd[1]: libpod-4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e.scope: Deactivated successfully.
Dec  6 05:12:40 np0005548915 podman[272668]: 2025-12-06 10:12:40.873175637 +0000 UTC m=+0.162738340 container died 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.871 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0c47f031-5056-484a-ac8e-3b17b4af1392]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.872 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2ce21d9-e1 in ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:12:40 np0005548915 nova_compute[254819]: 2025-12-06 10:12:40.875 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.877 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2ce21d9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.877 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dfbfa5-66ea-4f64-aa7a-137559b5dd1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.879 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[35deb2f0-c615-4d31-a84c-0aad3d39d80d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:40 np0005548915 systemd-machined[216202]: New machine qemu-6-instance-00000009.
Dec  6 05:12:40 np0005548915 systemd[1]: Started Virtual Machine qemu-6-instance-00000009.
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.891 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[702b6e31-ed35-4b3c-93b4-ef423bc71668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:40] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec  6 05:12:40 np0005548915 systemd-udevd[272713]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:12:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:40] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec  6 05:12:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c399dfdb8edfc87f3fc79dd3c420a2a4c320c15a4237f8470a731242e769846f-merged.mount: Deactivated successfully.
Dec  6 05:12:40 np0005548915 NetworkManager[48882]: <info>  [1765015960.9195] device (tap4c8ce68f-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:12:40 np0005548915 NetworkManager[48882]: <info>  [1765015960.9206] device (tap4c8ce68f-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:12:40 np0005548915 podman[272668]: 2025-12-06 10:12:40.921835037 +0000 UTC m=+0.211397730 container remove 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.924 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8822aa-2fcc-48b8-8b40-1b77c6cc40ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:40 np0005548915 systemd[1]: libpod-conmon-4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e.scope: Deactivated successfully.
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.962 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[f74f2d88-2175-4327-a3f1-d9731ea346ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:40 np0005548915 NetworkManager[48882]: <info>  [1765015960.9701] manager: (tapc2ce21d9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Dec  6 05:12:40 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.972 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8e11f102-1ca8-4c9a-8220-07ff3c64922c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.015 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c8fbb5-7813-454e-a1bd-e6f47b1ae821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.021 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[bdfc9403-c896-4b42-87fb-8b4e166892b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 NetworkManager[48882]: <info>  [1765015961.0550] device (tapc2ce21d9-e0): carrier: link connected
Dec  6 05:12:41 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:12:41 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:41 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:41 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.060 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[58b78731-8a50-49d0-80d4-0144cb3f8cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.082 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[942687c4-b87b-46bd-b35e-7492a053e677]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433153, 'reachable_time': 37066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272756, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.100 254824 DEBUG nova.compute.manager [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.100 254824 DEBUG oslo_concurrency.lockutils [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.101 254824 DEBUG oslo_concurrency.lockutils [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.101 254824 DEBUG oslo_concurrency.lockutils [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.101 254824 DEBUG nova.compute.manager [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Processing event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.108 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5875606a-2bc1-4b95-aa24-58590aa98390]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:5864'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433153, 'tstamp': 433153}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272763, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.129 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4c500899-988f-4102-97a2-65638e870f4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433153, 'reachable_time': 37066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272769, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 podman[272754]: 2025-12-06 10:12:41.132868487 +0000 UTC m=+0.056163772 container create ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.150 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.173 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d598bfe5-0f35-4dec-819d-98e47b63df80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 systemd[1]: Started libpod-conmon-ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599.scope.
Dec  6 05:12:41 np0005548915 podman[272754]: 2025-12-06 10:12:41.110131459 +0000 UTC m=+0.033426764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:12:41 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:41 np0005548915 podman[272754]: 2025-12-06 10:12:41.245791286 +0000 UTC m=+0.169086591 container init ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.260 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc02cba-d3d5-4858-83d1-949f79bdfbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.262 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.263 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.263 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2ce21d9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:41 np0005548915 podman[272754]: 2025-12-06 10:12:41.264430863 +0000 UTC m=+0.187726168 container start ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 05:12:41 np0005548915 kernel: tapc2ce21d9-e0: entered promiscuous mode
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.265 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:41 np0005548915 podman[272754]: 2025-12-06 10:12:41.268191614 +0000 UTC m=+0.191486919 container attach ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 05:12:41 np0005548915 NetworkManager[48882]: <info>  [1765015961.2681] manager: (tapc2ce21d9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.278 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2ce21d9-e0, col_values=(('external_ids', {'iface-id': '52d33d15-d96f-4c26-a63e-0415fca27e6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:41 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:41Z|00105|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.280 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.282 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.283 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b650734-047c-41ab-a05f-a13a3d664431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.284 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:12:41 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.286 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'env', 'PROCESS_TAG=haproxy-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2ce21d9-e711-470f-89f6-0db58ded70b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.296 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.323 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.326 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015961.3213038, 588b3b1f-9845-438c-89c4-744f95204b42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.327 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Started (Lifecycle Event)#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.334 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.340 254824 INFO nova.virt.libvirt.driver [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance spawned successfully.#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.342 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.365 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.375 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.383 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.383 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.384 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.385 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.385 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.385 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.410 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.410 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015961.3217614, 588b3b1f-9845-438c-89c4-744f95204b42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.411 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Paused (Lifecycle Event)#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.436 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.441 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015961.3327663, 588b3b1f-9845-438c-89c4-744f95204b42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.441 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Resumed (Lifecycle Event)#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.445 254824 INFO nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 5.85 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.446 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.457 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.461 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.486 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.529 254824 INFO nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 6.82 seconds to build instance.#033[00m
Dec  6 05:12:41 np0005548915 nova_compute[254819]: 2025-12-06 10:12:41.556 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:41 np0005548915 beautiful_gates[272809]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:12:41 np0005548915 beautiful_gates[272809]: --> All data devices are unavailable
Dec  6 05:12:41 np0005548915 systemd[1]: libpod-ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599.scope: Deactivated successfully.
Dec  6 05:12:41 np0005548915 podman[272754]: 2025-12-06 10:12:41.647948413 +0000 UTC m=+0.571243708 container died ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 05:12:41 np0005548915 systemd[1]: var-lib-containers-storage-overlay-707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1-merged.mount: Deactivated successfully.
Dec  6 05:12:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:41.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:41 np0005548915 podman[272754]: 2025-12-06 10:12:41.697529548 +0000 UTC m=+0.620824843 container remove ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 05:12:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:41.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:41 np0005548915 systemd[1]: libpod-conmon-ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599.scope: Deactivated successfully.
Dec  6 05:12:41 np0005548915 podman[272864]: 2025-12-06 10:12:41.754447019 +0000 UTC m=+0.069895689 container create 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 05:12:41 np0005548915 systemd[1]: Started libpod-conmon-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1.scope.
Dec  6 05:12:41 np0005548915 podman[272864]: 2025-12-06 10:12:41.722192508 +0000 UTC m=+0.037641228 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:12:41 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030cc584701fc2ae4a0e4246d98dd4c32466f55ec245b3b9cff9771d81d5672e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:41 np0005548915 podman[272864]: 2025-12-06 10:12:41.854796032 +0000 UTC m=+0.170244722 container init 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 05:12:41 np0005548915 podman[272864]: 2025-12-06 10:12:41.861714737 +0000 UTC m=+0.177163407 container start 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:12:41 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : New worker (272942) forked
Dec  6 05:12:41 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : Loading success.
Dec  6 05:12:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:12:42 np0005548915 podman[272994]: 2025-12-06 10:12:42.421494957 +0000 UTC m=+0.054736123 container create 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 05:12:42 np0005548915 systemd[1]: Started libpod-conmon-01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193.scope.
Dec  6 05:12:42 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:42 np0005548915 podman[272994]: 2025-12-06 10:12:42.402583762 +0000 UTC m=+0.035824958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:12:42 np0005548915 podman[272994]: 2025-12-06 10:12:42.516682141 +0000 UTC m=+0.149923317 container init 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  6 05:12:42 np0005548915 podman[272994]: 2025-12-06 10:12:42.525529828 +0000 UTC m=+0.158770994 container start 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:12:42 np0005548915 podman[272994]: 2025-12-06 10:12:42.528390075 +0000 UTC m=+0.161631271 container attach 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:12:42 np0005548915 adoring_elgamal[273010]: 167 167
Dec  6 05:12:42 np0005548915 systemd[1]: libpod-01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193.scope: Deactivated successfully.
Dec  6 05:12:42 np0005548915 podman[272994]: 2025-12-06 10:12:42.533680295 +0000 UTC m=+0.166921471 container died 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  6 05:12:42 np0005548915 systemd[1]: var-lib-containers-storage-overlay-00b20e05dda0fcaf071dfcacb7a162261420d8334a6e6341a4950029f156cc8a-merged.mount: Deactivated successfully.
Dec  6 05:12:42 np0005548915 podman[272994]: 2025-12-06 10:12:42.567978693 +0000 UTC m=+0.201219889 container remove 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:12:42 np0005548915 systemd[1]: libpod-conmon-01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193.scope: Deactivated successfully.
Dec  6 05:12:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a860 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:42 np0005548915 podman[273036]: 2025-12-06 10:12:42.785610338 +0000 UTC m=+0.053976923 container create f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 05:12:42 np0005548915 systemd[1]: Started libpod-conmon-f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c.scope.
Dec  6 05:12:42 np0005548915 podman[273036]: 2025-12-06 10:12:42.759440959 +0000 UTC m=+0.027807594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:12:42 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:42 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:42 np0005548915 podman[273036]: 2025-12-06 10:12:42.917026661 +0000 UTC m=+0.185393326 container init f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 05:12:42 np0005548915 podman[273036]: 2025-12-06 10:12:42.924994164 +0000 UTC m=+0.193360749 container start f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  6 05:12:42 np0005548915 podman[273036]: 2025-12-06 10:12:42.928527758 +0000 UTC m=+0.196894433 container attach f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.042 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.043 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.043 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.044 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.045 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.047 254824 INFO nova.compute.manager [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Terminating instance#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.049 254824 DEBUG nova.compute.manager [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:12:43 np0005548915 kernel: tap4c8ce68f-8a (unregistering): left promiscuous mode
Dec  6 05:12:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:43 np0005548915 NetworkManager[48882]: <info>  [1765015963.1071] device (tap4c8ce68f-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:12:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:43Z|00106|binding|INFO|Releasing lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e from this chassis (sb_readonly=0)
Dec  6 05:12:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:43Z|00107|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e down in Southbound
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.117 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 ovn_controller[152417]: 2025-12-06T10:12:43Z|00108|binding|INFO|Removing iface tap4c8ce68f-8a ovn-installed in OVS
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.127 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '588b3b1f-9845-438c-89c4-744f95204b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.129 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 unbound from our chassis#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.130 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2ce21d9-e711-470f-89f6-0db58ded70b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.132 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a1282c-5488-4f73-a411-0540e282a538]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.132 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace which is not needed anymore#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.143 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  6 05:12:43 np0005548915 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Consumed 2.231s CPU time.
Dec  6 05:12:43 np0005548915 systemd-machined[216202]: Machine qemu-6-instance-00000009 terminated.
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.184 254824 DEBUG nova.compute.manager [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.186 254824 DEBUG oslo_concurrency.lockutils [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 DEBUG oslo_concurrency.lockutils [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 DEBUG oslo_concurrency.lockutils [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 DEBUG nova.compute.manager [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 WARNING nova.compute.manager [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state active and task_state deleting.#033[00m
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]: {
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:    "1": [
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:        {
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "devices": [
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "/dev/loop3"
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            ],
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "lv_name": "ceph_lv0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "lv_size": "21470642176",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "name": "ceph_lv0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "tags": {
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.cluster_name": "ceph",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.crush_device_class": "",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.encrypted": "0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.osd_id": "1",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.type": "block",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.vdo": "0",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:                "ceph.with_tpm": "0"
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            },
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "type": "block",
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:            "vg_name": "ceph_vg0"
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:        }
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]:    ]
Dec  6 05:12:43 np0005548915 recursing_jackson[273052]: }
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.274 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.279 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.289 254824 INFO nova.virt.libvirt.driver [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance destroyed successfully.#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.290 254824 DEBUG nova.objects.instance [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 588b3b1f-9845-438c-89c4-744f95204b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.301 254824 DEBUG nova.virt.libvirt.vif [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1549098257',display_name='tempest-TestNetworkBasicOps-server-1549098257',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1549098257',id=9,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjlfXiWeP25/+Al9avXS7k5sTY7UpSTwvIPTlqQIhh0XClSeVPzmFV420fI5WFwr8qS2zHe5RQB0WDD7hpreK+FV5EzKAwwCW1d4oQG8NLOPL6t68qoP/9Hs+y9Im3qyA==',key_name='tempest-TestNetworkBasicOps-1342068066',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:12:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-8kktnhof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:12:41Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=588b3b1f-9845-438c-89c4-744f95204b42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.302 254824 DEBUG nova.network.os_vif_util [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.303 254824 DEBUG nova.network.os_vif_util [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.303 254824 DEBUG os_vif [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:12:43 np0005548915 systemd[1]: libpod-f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c.scope: Deactivated successfully.
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.306 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.307 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c8ce68f-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:43 np0005548915 podman[273036]: 2025-12-06 10:12:43.310203239 +0000 UTC m=+0.578569814 container died f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.312 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.316 254824 INFO os_vif [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')#033[00m
Dec  6 05:12:43 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : haproxy version is 2.8.14-c23fe91
Dec  6 05:12:43 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : path to executable is /usr/sbin/haproxy
Dec  6 05:12:43 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [WARNING]  (272923) : Exiting Master process...
Dec  6 05:12:43 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [WARNING]  (272923) : Exiting Master process...
Dec  6 05:12:43 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [ALERT]    (272923) : Current worker (272942) exited with code 143 (Terminated)
Dec  6 05:12:43 np0005548915 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [WARNING]  (272923) : All workers exited. Exiting... (0)
Dec  6 05:12:43 np0005548915 systemd[1]: libpod-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1.scope: Deactivated successfully.
Dec  6 05:12:43 np0005548915 podman[273084]: 2025-12-06 10:12:43.334099158 +0000 UTC m=+0.066452378 container died 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  6 05:12:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb-merged.mount: Deactivated successfully.
Dec  6 05:12:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1-userdata-shm.mount: Deactivated successfully.
Dec  6 05:12:43 np0005548915 systemd[1]: var-lib-containers-storage-overlay-030cc584701fc2ae4a0e4246d98dd4c32466f55ec245b3b9cff9771d81d5672e-merged.mount: Deactivated successfully.
Dec  6 05:12:43 np0005548915 podman[273036]: 2025-12-06 10:12:43.384634858 +0000 UTC m=+0.653001443 container remove f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:12:43 np0005548915 podman[273084]: 2025-12-06 10:12:43.39069367 +0000 UTC m=+0.123046890 container cleanup 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 05:12:43 np0005548915 systemd[1]: libpod-conmon-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1.scope: Deactivated successfully.
Dec  6 05:12:43 np0005548915 systemd[1]: libpod-conmon-f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c.scope: Deactivated successfully.
Dec  6 05:12:43 np0005548915 podman[273162]: 2025-12-06 10:12:43.468569391 +0000 UTC m=+0.049019970 container remove 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 05:12:43 np0005548915 podman[273140]: 2025-12-06 10:12:43.470162814 +0000 UTC m=+0.094085206 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.478 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[69ef7658-05b0-426f-83f3-55c42270fe56]: (4, ('Sat Dec  6 10:12:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1)\n81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1\nSat Dec  6 10:12:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1)\n81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.482 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[404700d2-cbee-4289-810b-4c95ceacc00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.483 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.532 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 kernel: tapc2ce21d9-e0: left promiscuous mode
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.535 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.538 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e59d1786-18fe-410a-ad0a-e72af56c6d6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.553 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.558 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[021c0d26-4bf3-4257-a9ab-8401f52aebe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.560 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[03934674-8366-4dfc-9f62-2ce704013e34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.579 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe5f821-f1fc-4339-991c-304ce3cb05a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433144, 'reachable_time': 16542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273234, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 systemd[1]: run-netns-ovnmeta\x2dc2ce21d9\x2de711\x2d470f\x2d89f6\x2d0db58ded70b9.mount: Deactivated successfully.
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.585 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:12:43 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.585 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1a13a6-3f24-43e3-8db1-ee3a6653d029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:12:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:43.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.734 254824 INFO nova.virt.libvirt.driver [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deleting instance files /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42_del#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.735 254824 INFO nova.virt.libvirt.driver [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deletion of /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42_del complete#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.880 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.906 254824 INFO nova.compute.manager [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.907 254824 DEBUG oslo.service.loopingcall [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.907 254824 DEBUG nova.compute.manager [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:12:43 np0005548915 nova_compute[254819]: 2025-12-06 10:12:43.908 254824 DEBUG nova.network.neutron [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:12:44 np0005548915 podman[273305]: 2025-12-06 10:12:44.06754037 +0000 UTC m=+0.045973289 container create 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:12:44 np0005548915 systemd[1]: Started libpod-conmon-26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af.scope.
Dec  6 05:12:44 np0005548915 podman[273305]: 2025-12-06 10:12:44.047159345 +0000 UTC m=+0.025592294 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:12:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:44 np0005548915 podman[273305]: 2025-12-06 10:12:44.177892639 +0000 UTC m=+0.156325708 container init 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:12:44 np0005548915 podman[273305]: 2025-12-06 10:12:44.186580542 +0000 UTC m=+0.165013441 container start 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:12:44 np0005548915 podman[273305]: 2025-12-06 10:12:44.190064305 +0000 UTC m=+0.168497254 container attach 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:12:44 np0005548915 serene_haslett[273324]: 167 167
Dec  6 05:12:44 np0005548915 systemd[1]: libpod-26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af.scope: Deactivated successfully.
Dec  6 05:12:44 np0005548915 podman[273305]: 2025-12-06 10:12:44.192861549 +0000 UTC m=+0.171294448 container died 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 05:12:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec  6 05:12:44 np0005548915 systemd[1]: var-lib-containers-storage-overlay-13cbdb1bc3f525d1decc1c450ce49b569783fdc5ec360b025c321d6bdab447e4-merged.mount: Deactivated successfully.
Dec  6 05:12:44 np0005548915 podman[273305]: 2025-12-06 10:12:44.239973929 +0000 UTC m=+0.218406838 container remove 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 05:12:44 np0005548915 systemd[1]: libpod-conmon-26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af.scope: Deactivated successfully.
Dec  6 05:12:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:12:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197267841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.353 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:44 np0005548915 podman[273348]: 2025-12-06 10:12:44.415552121 +0000 UTC m=+0.051433976 container create 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 05:12:44 np0005548915 systemd[1]: Started libpod-conmon-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope.
Dec  6 05:12:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:12:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:12:44 np0005548915 podman[273348]: 2025-12-06 10:12:44.395609908 +0000 UTC m=+0.031491793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:12:44 np0005548915 podman[273348]: 2025-12-06 10:12:44.49819232 +0000 UTC m=+0.134074165 container init 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:12:44 np0005548915 podman[273348]: 2025-12-06 10:12:44.512625235 +0000 UTC m=+0.148507130 container start 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:12:44 np0005548915 podman[273348]: 2025-12-06 10:12:44.516622382 +0000 UTC m=+0.152504237 container attach 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.529 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.532 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4447MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.532 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.533 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.615 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 588b3b1f-9845-438c-89c4-744f95204b42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.616 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.616 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:12:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.731 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:44.897 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:12:44 np0005548915 nova_compute[254819]: 2025-12-06 10:12:44.898 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:44 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:44.900 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.014 254824 DEBUG nova.network.neutron [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.075 254824 INFO nova.compute.manager [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 1.17 seconds to deallocate network for instance.#033[00m
Dec  6 05:12:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.123 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:45 np0005548915 lvm[273460]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:12:45 np0005548915 lvm[273460]: VG ceph_vg0 finished
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219237189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.250 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.257 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:12:45 np0005548915 sweet_diffie[273366]: {}
Dec  6 05:12:45 np0005548915 systemd[1]: libpod-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope: Deactivated successfully.
Dec  6 05:12:45 np0005548915 systemd[1]: libpod-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope: Consumed 1.315s CPU time.
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.408 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:12:45 np0005548915 podman[273466]: 2025-12-06 10:12:45.411134059 +0000 UTC m=+0.031733589 container died 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.434 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.434 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.435 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740-merged.mount: Deactivated successfully.
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.443 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.443 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] No waiting events found dispatching network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 WARNING nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received unexpected event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.446 254824 WARNING nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:12:45 np0005548915 podman[273466]: 2025-12-06 10:12:45.46688853 +0000 UTC m=+0.087488060 container remove 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:12:45 np0005548915 systemd[1]: libpod-conmon-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope: Deactivated successfully.
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.494 254824 DEBUG oslo_concurrency.processutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:45.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:45.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:12:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784791036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.988 254824 DEBUG oslo_concurrency.processutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:12:45 np0005548915 nova_compute[254819]: 2025-12-06 10:12:45.997 254824 DEBUG nova.compute.provider_tree [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.020 254824 DEBUG nova.scheduler.client.report [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:12:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:12:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3659217674' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:12:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:12:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3659217674' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.044 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.087 254824 INFO nova.scheduler.client.report [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 588b3b1f-9845-438c-89c4-744f95204b42#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.151 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.193 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.435 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.436 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.436 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:46 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:46 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:12:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.762 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.763 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:46 np0005548915 nova_compute[254819]: 2025-12-06 10:12:46.763 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:47.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:47.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:47 np0005548915 nova_compute[254819]: 2025-12-06 10:12:47.756 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec  6 05:12:48 np0005548915 nova_compute[254819]: 2025-12-06 10:12:48.356 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:48 np0005548915 nova_compute[254819]: 2025-12-06 10:12:48.765 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:49.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:49 np0005548915 podman[273533]: 2025-12-06 10:12:49.49736924 +0000 UTC m=+0.107945786 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 05:12:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:49.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:12:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:49.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:12:49 np0005548915 nova_compute[254819]: 2025-12-06 10:12:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:12:49 np0005548915 nova_compute[254819]: 2025-12-06 10:12:49.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.819302) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969819449, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 255, "total_data_size": 2258539, "memory_usage": 2305904, "flush_reason": "Manual Compaction"}
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969841310, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2208581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26987, "largest_seqno": 28273, "table_properties": {"data_size": 2202622, "index_size": 3222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12816, "raw_average_key_size": 19, "raw_value_size": 2190437, "raw_average_value_size": 3308, "num_data_blocks": 142, "num_entries": 662, "num_filter_entries": 662, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015855, "oldest_key_time": 1765015855, "file_creation_time": 1765015969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22276 microseconds, and 7575 cpu microseconds.
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.841378) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2208581 bytes OK
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.841648) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.845148) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.845168) EVENT_LOG_v1 {"time_micros": 1765015969845162, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.845195) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2252820, prev total WAL file size 2252820, number of live WAL files 2.
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.846181) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2156KB)], [59(14MB)]
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969846293, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17656653, "oldest_snapshot_seqno": -1}
Dec  6 05:12:49 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:49.903 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6027 keys, 17524888 bytes, temperature: kUnknown
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969987730, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17524888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17480988, "index_size": 27726, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153672, "raw_average_key_size": 25, "raw_value_size": 17368468, "raw_average_value_size": 2881, "num_data_blocks": 1135, "num_entries": 6027, "num_filter_entries": 6027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.988014) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17524888 bytes
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.989708) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.8 rd, 123.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 14.7 +0.0 blob) out(16.7 +0.0 blob), read-write-amplify(15.9) write-amplify(7.9) OK, records in: 6553, records dropped: 526 output_compression: NoCompression
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.989724) EVENT_LOG_v1 {"time_micros": 1765015969989716, "job": 32, "event": "compaction_finished", "compaction_time_micros": 141520, "compaction_time_cpu_micros": 50701, "output_level": 6, "num_output_files": 1, "total_output_size": 17524888, "num_input_records": 6553, "num_output_records": 6027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969990158, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969992470, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.846006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:12:49 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:12:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 102 op/s
Dec  6 05:12:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:50] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec  6 05:12:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:50] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec  6 05:12:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:51 np0005548915 nova_compute[254819]: 2025-12-06 10:12:51.153 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec  6 05:12:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:53 np0005548915 nova_compute[254819]: 2025-12-06 10:12:53.359 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:53 np0005548915 nova_compute[254819]: 2025-12-06 10:12:53.599 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:53 np0005548915 nova_compute[254819]: 2025-12-06 10:12:53.681 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:12:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:53.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:12:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:12:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:12:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:12:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:12:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:12:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:12:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:12:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:12:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec  6 05:12:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:12:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:12:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:12:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:12:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:12:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:55.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:56 np0005548915 nova_compute[254819]: 2025-12-06 10:12:56.156 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec  6 05:12:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:57.646Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:12:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:57.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:12:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec  6 05:12:58 np0005548915 nova_compute[254819]: 2025-12-06 10:12:58.288 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015963.2865648, 588b3b1f-9845-438c-89c4-744f95204b42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:12:58 np0005548915 nova_compute[254819]: 2025-12-06 10:12:58.288 254824 INFO nova.compute.manager [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:12:58 np0005548915 nova_compute[254819]: 2025-12-06 10:12:58.306 254824 DEBUG nova.compute.manager [None req-2d55263f-51c1-44b6-932a-640fdd44757e - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:12:58 np0005548915 nova_compute[254819]: 2025-12-06 10:12:58.391 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:12:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:59.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:12:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:12:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:12:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:12:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:59.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:12:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:13:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:13:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:13:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:01 np0005548915 nova_compute[254819]: 2025-12-06 10:13:01.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.003000079s ======
Dec  6 05:13:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:01.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec  6 05:13:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:01.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:13:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:03 np0005548915 nova_compute[254819]: 2025-12-06 10:13:03.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:13:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:13:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:03.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 05:13:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:05.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:05.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:06 np0005548915 nova_compute[254819]: 2025-12-06 10:13:06.159 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:13:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:07.647Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:07.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.437 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.710 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.711 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.724 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:13:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.790 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.790 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.797 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.797 254824 INFO nova.compute.claims [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:13:08 np0005548915 nova_compute[254819]: 2025-12-06 10:13:08.881 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:13:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:13:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:09.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:13:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3673570831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.336 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.343 254824 DEBUG nova.compute.provider_tree [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.366 254824 DEBUG nova.scheduler.client.report [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.387 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.387 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.441 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.441 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:13:09 np0005548915 podman[273623]: 2025-12-06 10:13:09.456881662 +0000 UTC m=+0.083928304 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.503 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.519 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.587 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.588 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.588 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Creating image(s)
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.617 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.649 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.677 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.680 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 05:13:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.731 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.732 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.733 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.733 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.756 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 05:13:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:09.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:09 np0005548915 nova_compute[254819]: 2025-12-06 10:13:09.759 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 05:13:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.011 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.073 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.167 254824 DEBUG nova.objects.instance [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid b735e225-377d-4f50-aae2-4bf5dd4eb9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.187 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.187 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Ensure instance console log exists: /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.188 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.188 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.188 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:13:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:13:10 np0005548915 nova_compute[254819]: 2025-12-06 10:13:10.391 254824 DEBUG nova.policy [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  6 05:13:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:13:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:13:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:11 np0005548915 nova_compute[254819]: 2025-12-06 10:13:11.164 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:13:11 np0005548915 nova_compute[254819]: 2025-12-06 10:13:11.621 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Successfully created port: 923b504a-09da-476b-a8c8-c6c76c5e8343 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 05:13:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:13:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:13:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:13:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.061 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Successfully updated port: 923b504a-09da-476b-a8c8-c6c76c5e8343 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.072 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.073 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.073 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 05:13:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.174 254824 DEBUG nova.compute.manager [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.174 254824 DEBUG nova.compute.manager [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing instance network info cache due to event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.174 254824 DEBUG oslo_concurrency.lockutils [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.248 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 05:13:13 np0005548915 nova_compute[254819]: 2025-12-06 10:13:13.441 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:13:13 np0005548915 podman[273838]: 2025-12-06 10:13:13.673723253 +0000 UTC m=+0.141120102 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  6 05:13:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.107 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.135 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.136 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance network_info: |[{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.136 254824 DEBUG oslo_concurrency.lockutils [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.137 254824 DEBUG nova.network.neutron [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.142 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start _get_guest_xml network_info=[{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.148 254824 WARNING nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.154 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.154 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.165 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.166 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.166 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.167 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.168 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.168 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.169 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.169 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.169 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.170 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.170 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.171 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.171 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.172 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.177 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:13:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:13:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1883968166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:13:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.674 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.711 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:13:14 np0005548915 nova_compute[254819]: 2025-12-06 10:13:14.718 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:13:15 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840050543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.199 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.202 254824 DEBUG nova.virt.libvirt.vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-767347043',display_name='tempest-TestNetworkBasicOps-server-767347043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-767347043',id=10,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKo7uC0irjYnKyVEGtEn/nYgythvknyTt45P5kPX1NZlUQ4NHagXOXCZs1+RjUHYK3oEDqvVo3L7WEeQEsh2SWgKD0PXaBMlx1FpXYkm1OxP+oK804aHcHmvv61DYBpjSw==',key_name='tempest-TestNetworkBasicOps-1442962553',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-rc0ojmmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:13:09Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=b735e225-377d-4f50-aae2-4bf5dd4eb9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.202 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.204 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.205 254824 DEBUG nova.objects.instance [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid b735e225-377d-4f50-aae2-4bf5dd4eb9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.229 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <uuid>b735e225-377d-4f50-aae2-4bf5dd4eb9fa</uuid>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <name>instance-0000000a</name>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-767347043</nova:name>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:13:14</nova:creationTime>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <nova:port uuid="923b504a-09da-476b-a8c8-c6c76c5e8343">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <entry name="serial">b735e225-377d-4f50-aae2-4bf5dd4eb9fa</entry>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <entry name="uuid">b735e225-377d-4f50-aae2-4bf5dd4eb9fa</entry>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:b7:ab:4e"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <target dev="tap923b504a-09"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/console.log" append="off"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:13:15 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:13:15 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:13:15 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:13:15 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.230 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Preparing to wait for external event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.231 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.231 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.231 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.232 254824 DEBUG nova.virt.libvirt.vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-767347043',display_name='tempest-TestNetworkBasicOps-server-767347043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-767347043',id=10,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKo7uC0irjYnKyVEGtEn/nYgythvknyTt45P5kPX1NZlUQ4NHagXOXCZs1+RjUHYK3oEDqvVo3L7WEeQEsh2SWgKD0PXaBMlx1FpXYkm1OxP+oK804aHcHmvv61DYBpjSw==',key_name='tempest-TestNetworkBasicOps-1442962553',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-rc0ojmmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:13:09Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=b735e225-377d-4f50-aae2-4bf5dd4eb9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.233 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.233 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.234 254824 DEBUG os_vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.234 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.235 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.235 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.238 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.239 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap923b504a-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.239 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap923b504a-09, col_values=(('external_ids', {'iface-id': '923b504a-09da-476b-a8c8-c6c76c5e8343', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:ab:4e', 'vm-uuid': 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.240 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:15 np0005548915 NetworkManager[48882]: <info>  [1765015995.2417] manager: (tap923b504a-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.243 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.248 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.249 254824 INFO os_vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09')#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.290 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.290 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.291 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:b7:ab:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.291 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Using config drive#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.317 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.345 254824 DEBUG nova.network.neutron [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updated VIF entry in instance network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.345 254824 DEBUG nova.network.neutron [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.357 254824 DEBUG oslo_concurrency.lockutils [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.619 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Creating config drive at /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.630 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5q5fp3b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.764 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5q5fp3b" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:15.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.800 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.804 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.984 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:15 np0005548915 nova_compute[254819]: 2025-12-06 10:13:15.985 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deleting local config drive /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config because it was imported into RBD.#033[00m
Dec  6 05:13:16 np0005548915 kernel: tap923b504a-09: entered promiscuous mode
Dec  6 05:13:16 np0005548915 NetworkManager[48882]: <info>  [1765015996.0476] manager: (tap923b504a-09): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec  6 05:13:16 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:16Z|00109|binding|INFO|Claiming lport 923b504a-09da-476b-a8c8-c6c76c5e8343 for this chassis.
Dec  6 05:13:16 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:16Z|00110|binding|INFO|923b504a-09da-476b-a8c8-c6c76c5e8343: Claiming fa:16:3e:b7:ab:4e 10.100.0.5
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.066 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.081 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ab:4e 10.100.0.5'], port_security=['fa:16:3e:b7:ab:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-565d9ab5-f943-4873-8a20-970fba448d46', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07ac2c97-c1ea-402b-a4af-4b99fec7720e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86948586-50a4-4571-ad91-ae78b72ed8de, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=923b504a-09da-476b-a8c8-c6c76c5e8343) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.082 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 923b504a-09da-476b-a8c8-c6c76c5e8343 in datapath 565d9ab5-f943-4873-8a20-970fba448d46 bound to our chassis#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.083 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 565d9ab5-f943-4873-8a20-970fba448d46#033[00m
Dec  6 05:13:16 np0005548915 systemd-udevd[274001]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.099 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7ffb27-081d-4714-93ca-5438ab16999c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.100 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap565d9ab5-f1 in ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.102 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap565d9ab5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.102 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6d535c95-1ad6-4716-bf42-4bc3b1fc978c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.103 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f7ea15-171d-444c-a516-b9d961ffafb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 NetworkManager[48882]: <info>  [1765015996.1098] device (tap923b504a-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:13:16 np0005548915 NetworkManager[48882]: <info>  [1765015996.1110] device (tap923b504a-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.116 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[05555532-ed56-4482-8abe-b200a48379f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 systemd-machined[216202]: New machine qemu-7-instance-0000000a.
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.147 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ed4a6a-cb09-46ec-9db2-e35f79ef3b29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.149 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:16 np0005548915 systemd[1]: Started Virtual Machine qemu-7-instance-0000000a.
Dec  6 05:13:16 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:16Z|00111|binding|INFO|Setting lport 923b504a-09da-476b-a8c8-c6c76c5e8343 ovn-installed in OVS
Dec  6 05:13:16 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:16Z|00112|binding|INFO|Setting lport 923b504a-09da-476b-a8c8-c6c76c5e8343 up in Southbound
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.156 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.165 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.177 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[a61a9882-11d7-4fd0-9b34-538edd075b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 NetworkManager[48882]: <info>  [1765015996.1871] manager: (tap565d9ab5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.186 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f5dd67-29be-4517-bfb7-0723f6b2f87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.238 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[2f504f34-0d07-4502-ad5a-da85fe319690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.242 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[2a120d21-462a-46ae-a391-4cc3a6c0dda5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 NetworkManager[48882]: <info>  [1765015996.2730] device (tap565d9ab5-f0): carrier: link connected
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.281 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[649d8d65-36de-420e-9f4a-8dd7964ad300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.302 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7da3e5-1c80-4eed-9ba4-538372c97d0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap565d9ab5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:2f:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436675, 'reachable_time': 42843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274037, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.319 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3962fbba-b32b-4ac3-8811-834635a53b63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed4:2f30'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436675, 'tstamp': 436675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274038, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.338 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb912b4-ce20-4ceb-9d11-5de09cd6da78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap565d9ab5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:2f:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436675, 'reachable_time': 42843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274039, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.363 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[89caefdd-2acd-4847-9c94-e28f8137fbc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.438 254824 DEBUG nova.compute.manager [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.439 254824 DEBUG oslo_concurrency.lockutils [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.440 254824 DEBUG oslo_concurrency.lockutils [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.440 254824 DEBUG oslo_concurrency.lockutils [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.441 254824 DEBUG nova.compute.manager [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Processing event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.454 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[17b23f74-7bde-47f7-aafc-614a3c8ba420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.456 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap565d9ab5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.456 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.456 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap565d9ab5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:16 np0005548915 kernel: tap565d9ab5-f0: entered promiscuous mode
Dec  6 05:13:16 np0005548915 NetworkManager[48882]: <info>  [1765015996.4598] manager: (tap565d9ab5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.458 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.462 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap565d9ab5-f0, col_values=(('external_ids', {'iface-id': '6aa255c1-2a72-4002-8ac0-9542a75d99f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:16 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:16Z|00113|binding|INFO|Releasing lport 6aa255c1-2a72-4002-8ac0-9542a75d99f5 from this chassis (sb_readonly=0)
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.464 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.466 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/565d9ab5-f943-4873-8a20-970fba448d46.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/565d9ab5-f943-4873-8a20-970fba448d46.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.467 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c5592908-9f74-472f-9999-e624c8329ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.467 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-565d9ab5-f943-4873-8a20-970fba448d46
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/565d9ab5-f943-4873-8a20-970fba448d46.pid.haproxy
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID 565d9ab5-f943-4873-8a20-970fba448d46
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:13:16 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.469 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'env', 'PROCESS_TAG=haproxy-565d9ab5-f943-4873-8a20-970fba448d46', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/565d9ab5-f943-4873-8a20-970fba448d46.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.480 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.825 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015996.8246639, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.826 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Started (Lifecycle Event)#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.830 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.835 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.839 254824 INFO nova.virt.libvirt.driver [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance spawned successfully.#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.840 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.847 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.852 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:13:16 np0005548915 podman[274111]: 2025-12-06 10:13:16.864307135 +0000 UTC m=+0.079468234 container create 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.873 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.874 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.875 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.875 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.876 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.876 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.881 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.882 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015996.8252559, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.882 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Paused (Lifecycle Event)#033[00m
Dec  6 05:13:16 np0005548915 podman[274111]: 2025-12-06 10:13:16.827949893 +0000 UTC m=+0.043111022 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.919 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.926 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015996.8341808, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.927 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Resumed (Lifecycle Event)#033[00m
Dec  6 05:13:16 np0005548915 systemd[1]: Started libpod-conmon-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128.scope.
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.955 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.962 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:13:16 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.970 254824 INFO nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 7.38 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.970 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:13:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9928877ca56a0e966d3eea9b89794c7d2e32547dafcfc2eff997d385c12891b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:16 np0005548915 nova_compute[254819]: 2025-12-06 10:13:16.988 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:13:16 np0005548915 podman[274111]: 2025-12-06 10:13:16.998376668 +0000 UTC m=+0.213537867 container init 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 05:13:17 np0005548915 podman[274111]: 2025-12-06 10:13:17.004138912 +0000 UTC m=+0.219300051 container start 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 05:13:17 np0005548915 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : New worker (274133) forked
Dec  6 05:13:17 np0005548915 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : Loading success.
Dec  6 05:13:17 np0005548915 nova_compute[254819]: 2025-12-06 10:13:17.051 254824 INFO nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 8.29 seconds to build instance.#033[00m
Dec  6 05:13:17 np0005548915 nova_compute[254819]: 2025-12-06 10:13:17.071 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:17.648Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:13:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:17.649Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:17.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec  6 05:13:18 np0005548915 nova_compute[254819]: 2025-12-06 10:13:18.511 254824 DEBUG nova.compute.manager [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:13:18 np0005548915 nova_compute[254819]: 2025-12-06 10:13:18.511 254824 DEBUG oslo_concurrency.lockutils [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:18 np0005548915 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 DEBUG oslo_concurrency.lockutils [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:18 np0005548915 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 DEBUG oslo_concurrency.lockutils [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:18 np0005548915 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 DEBUG nova.compute.manager [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] No waiting events found dispatching network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:13:18 np0005548915 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 WARNING nova.compute.manager [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received unexpected event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:13:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:19.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:13:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:19.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:13:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:19.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:13:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:19.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:13:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:19 np0005548915 nova_compute[254819]: 2025-12-06 10:13:19.919 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:19 np0005548915 NetworkManager[48882]: <info>  [1765015999.9215] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec  6 05:13:19 np0005548915 NetworkManager[48882]: <info>  [1765015999.9226] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec  6 05:13:19 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:19Z|00114|binding|INFO|Releasing lport 6aa255c1-2a72-4002-8ac0-9542a75d99f5 from this chassis (sb_readonly=0)
Dec  6 05:13:19 np0005548915 nova_compute[254819]: 2025-12-06 10:13:19.955 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:19 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:19Z|00115|binding|INFO|Releasing lport 6aa255c1-2a72-4002-8ac0-9542a75d99f5 from this chassis (sb_readonly=0)
Dec  6 05:13:19 np0005548915 nova_compute[254819]: 2025-12-06 10:13:19.959 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec  6 05:13:20 np0005548915 nova_compute[254819]: 2025-12-06 10:13:20.241 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:20 np0005548915 nova_compute[254819]: 2025-12-06 10:13:20.329 254824 DEBUG nova.compute.manager [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:13:20 np0005548915 nova_compute[254819]: 2025-12-06 10:13:20.329 254824 DEBUG nova.compute.manager [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing instance network info cache due to event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:13:20 np0005548915 nova_compute[254819]: 2025-12-06 10:13:20.330 254824 DEBUG oslo_concurrency.lockutils [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:13:20 np0005548915 nova_compute[254819]: 2025-12-06 10:13:20.330 254824 DEBUG oslo_concurrency.lockutils [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:13:20 np0005548915 nova_compute[254819]: 2025-12-06 10:13:20.330 254824 DEBUG nova.network.neutron [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:13:20 np0005548915 podman[274147]: 2025-12-06 10:13:20.430798224 +0000 UTC m=+0.056642135 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:13:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:13:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:13:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:21 np0005548915 nova_compute[254819]: 2025-12-06 10:13:21.167 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:21 np0005548915 nova_compute[254819]: 2025-12-06 10:13:21.219 254824 DEBUG nova.network.neutron [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updated VIF entry in instance network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:13:21 np0005548915 nova_compute[254819]: 2025-12-06 10:13:21.220 254824 DEBUG nova.network.neutron [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:13:21 np0005548915 nova_compute[254819]: 2025-12-06 10:13:21.242 254824 DEBUG oslo_concurrency.lockutils [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:13:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec  6 05:13:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:23.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:23.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:13:23
Dec  6 05:13:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:13:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:13:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log', '.nfs', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Dec  6 05:13:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:13:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:13:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:13:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:13:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:25 np0005548915 nova_compute[254819]: 2025-12-06 10:13:25.243 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:13:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:25.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:13:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:25.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:26 np0005548915 nova_compute[254819]: 2025-12-06 10:13:26.169 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  6 05:13:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:27.649Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:13:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:27.650Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:27.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:27 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 05:13:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  6 05:13:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101328 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:13:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:29.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:29.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:29 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:29Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:ab:4e 10.100.0.5
Dec  6 05:13:29 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:29Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ab:4e 10.100.0.5
Dec  6 05:13:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 20 op/s
Dec  6 05:13:30 np0005548915 nova_compute[254819]: 2025-12-06 10:13:30.246 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:30] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  6 05:13:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:30] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  6 05:13:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:31 np0005548915 nova_compute[254819]: 2025-12-06 10:13:31.171 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:31.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 20 op/s
Dec  6 05:13:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101333 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:13:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:33.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:33.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec  6 05:13:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:35 np0005548915 nova_compute[254819]: 2025-12-06 10:13:35.247 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:35.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:35 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 05:13:35 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 05:13:36 np0005548915 nova_compute[254819]: 2025-12-06 10:13:36.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  6 05:13:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:36 np0005548915 nova_compute[254819]: 2025-12-06 10:13:36.839 254824 INFO nova.compute.manager [None req-12e67c13-a2bc-4851-b629-051195e0d4aa 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Get console output#033[00m
Dec  6 05:13:36 np0005548915 nova_compute[254819]: 2025-12-06 10:13:36.846 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:13:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:37.651Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:13:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:37.651Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:13:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:37.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:37.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:13:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  6 05:13:38 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:38Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ab:4e 10.100.0.5
Dec  6 05:13:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:13:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:13:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:39.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:39.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  6 05:13:40 np0005548915 nova_compute[254819]: 2025-12-06 10:13:40.251 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:40 np0005548915 podman[274214]: 2025-12-06 10:13:40.437728808 +0000 UTC m=+0.065745746 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Dec  6 05:13:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:13:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:13:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:13:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:13:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:41 np0005548915 nova_compute[254819]: 2025-12-06 10:13:41.220 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:41 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:41Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ab:4e 10.100.0.5
Dec  6 05:13:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:41.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.009 254824 DEBUG nova.compute.manager [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.009 254824 DEBUG nova.compute.manager [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing instance network info cache due to event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.010 254824 DEBUG oslo_concurrency.lockutils [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.010 254824 DEBUG oslo_concurrency.lockutils [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.011 254824 DEBUG nova.network.neutron [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.123 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.125 254824 INFO nova.compute.manager [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Terminating instance#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.126 254824 DEBUG nova.compute.manager [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:13:42 np0005548915 kernel: tap923b504a-09 (unregistering): left promiscuous mode
Dec  6 05:13:42 np0005548915 NetworkManager[48882]: <info>  [1765016022.1903] device (tap923b504a-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:13:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  6 05:13:42 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:42Z|00116|binding|INFO|Releasing lport 923b504a-09da-476b-a8c8-c6c76c5e8343 from this chassis (sb_readonly=0)
Dec  6 05:13:42 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:42Z|00117|binding|INFO|Setting lport 923b504a-09da-476b-a8c8-c6c76c5e8343 down in Southbound
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.236 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 ovn_controller[152417]: 2025-12-06T10:13:42Z|00118|binding|INFO|Removing iface tap923b504a-09 ovn-installed in OVS
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.238 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.246 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ab:4e 10.100.0.5'], port_security=['fa:16:3e:b7:ab:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-565d9ab5-f943-4873-8a20-970fba448d46', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07ac2c97-c1ea-402b-a4af-4b99fec7720e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86948586-50a4-4571-ad91-ae78b72ed8de, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=923b504a-09da-476b-a8c8-c6c76c5e8343) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.247 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 923b504a-09da-476b-a8c8-c6c76c5e8343 in datapath 565d9ab5-f943-4873-8a20-970fba448d46 unbound from our chassis#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.249 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 565d9ab5-f943-4873-8a20-970fba448d46, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.251 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[639e7267-8362-4c6a-8316-ac7ffdd523eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.252 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 namespace which is not needed anymore#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  6 05:13:42 np0005548915 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Consumed 14.560s CPU time.
Dec  6 05:13:42 np0005548915 systemd-machined[216202]: Machine qemu-7-instance-0000000a terminated.
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.359 254824 INFO nova.virt.libvirt.driver [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance destroyed successfully.#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.360 254824 DEBUG nova.objects.instance [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid b735e225-377d-4f50-aae2-4bf5dd4eb9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:13:42 np0005548915 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : haproxy version is 2.8.14-c23fe91
Dec  6 05:13:42 np0005548915 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : path to executable is /usr/sbin/haproxy
Dec  6 05:13:42 np0005548915 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [WARNING]  (274131) : Exiting Master process...
Dec  6 05:13:42 np0005548915 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [ALERT]    (274131) : Current worker (274133) exited with code 143 (Terminated)
Dec  6 05:13:42 np0005548915 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [WARNING]  (274131) : All workers exited. Exiting... (0)
Dec  6 05:13:42 np0005548915 systemd[1]: libpod-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128.scope: Deactivated successfully.
Dec  6 05:13:42 np0005548915 podman[274261]: 2025-12-06 10:13:42.383548527 +0000 UTC m=+0.044523143 container died 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.392 254824 DEBUG nova.virt.libvirt.vif [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-767347043',display_name='tempest-TestNetworkBasicOps-server-767347043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-767347043',id=10,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKo7uC0irjYnKyVEGtEn/nYgythvknyTt45P5kPX1NZlUQ4NHagXOXCZs1+RjUHYK3oEDqvVo3L7WEeQEsh2SWgKD0PXaBMlx1FpXYkm1OxP+oK804aHcHmvv61DYBpjSw==',key_name='tempest-TestNetworkBasicOps-1442962553',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:13:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-rc0ojmmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:13:17Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=b735e225-377d-4f50-aae2-4bf5dd4eb9fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.393 254824 DEBUG nova.network.os_vif_util [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.393 254824 DEBUG nova.network.os_vif_util [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.393 254824 DEBUG os_vif [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.395 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.395 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap923b504a-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.396 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.399 254824 INFO os_vif [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09')#033[00m
Dec  6 05:13:42 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128-userdata-shm.mount: Deactivated successfully.
Dec  6 05:13:42 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9928877ca56a0e966d3eea9b89794c7d2e32547dafcfc2eff997d385c12891b6-merged.mount: Deactivated successfully.
Dec  6 05:13:42 np0005548915 podman[274261]: 2025-12-06 10:13:42.427356851 +0000 UTC m=+0.088331447 container cleanup 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 05:13:42 np0005548915 systemd[1]: libpod-conmon-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128.scope: Deactivated successfully.
Dec  6 05:13:42 np0005548915 podman[274318]: 2025-12-06 10:13:42.492128809 +0000 UTC m=+0.043480775 container remove 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.499 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9a2bf4-687e-49b9-b738-4ee083d84a5a]: (4, ('Sat Dec  6 10:13:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 (46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128)\n46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128\nSat Dec  6 10:13:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 (46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128)\n46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.501 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7914355c-28b1-4b4f-9c84-fc2bc545c569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.502 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap565d9ab5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.504 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 kernel: tap565d9ab5-f0: left promiscuous mode
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.506 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.511 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3f4715-9fef-4b32-ad08-3a20c048bf5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.521 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.527 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[82e13ccd-93bc-47c6-9c22-8f80677fa5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.528 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[36ae2f8e-ef07-4bc4-a31d-c321aa986449]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.544 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[27c3e4b9-947f-493a-abf2-4c6fc85f38da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436665, 'reachable_time': 39423, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274336, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 systemd[1]: run-netns-ovnmeta\x2d565d9ab5\x2df943\x2d4873\x2d8a20\x2d970fba448d46.mount: Deactivated successfully.
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.547 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:13:42 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.548 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b37365-1fc2-4622-bb90-d0d01bae19c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.577 254824 DEBUG nova.compute.manager [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-unplugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.577 254824 DEBUG oslo_concurrency.lockutils [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.578 254824 DEBUG oslo_concurrency.lockutils [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.578 254824 DEBUG oslo_concurrency.lockutils [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.578 254824 DEBUG nova.compute.manager [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] No waiting events found dispatching network-vif-unplugged-923b504a-09da-476b-a8c8-c6c76c5e8343 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.579 254824 DEBUG nova.compute.manager [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-unplugged-923b504a-09da-476b-a8c8-c6c76c5e8343 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 05:13:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.777 254824 INFO nova.virt.libvirt.driver [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deleting instance files /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_del#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.778 254824 INFO nova.virt.libvirt.driver [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deletion of /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_del complete#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.837 254824 INFO nova.compute.manager [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.838 254824 DEBUG oslo.service.loopingcall [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.840 254824 DEBUG nova.compute.manager [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:13:42 np0005548915 nova_compute[254819]: 2025-12-06 10:13:42.840 254824 DEBUG nova.network.neutron [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.108 254824 DEBUG nova.network.neutron [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updated VIF entry in instance network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.109 254824 DEBUG nova.network.neutron [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.130 254824 DEBUG oslo_concurrency.lockutils [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:13:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.344 254824 DEBUG nova.network.neutron [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.374 254824 INFO nova.compute.manager [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 0.53 seconds to deallocate network for instance.#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.415 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.416 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.458 254824 DEBUG oslo_concurrency.processutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:13:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.788 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:13:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:13:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:13:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:13:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410722622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.950 254824 DEBUG oslo_concurrency.processutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.957 254824 DEBUG nova.compute.provider_tree [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:13:43 np0005548915 nova_compute[254819]: 2025-12-06 10:13:43.976 254824 DEBUG nova.scheduler.client.report [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.000 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.064 254824 INFO nova.scheduler.client.report [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance b735e225-377d-4f50-aae2-4bf5dd4eb9fa#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.082 254824 DEBUG nova.compute.manager [req-d04a51d0-a100-4724-a9c2-455b40950721 req-9675eff8-be61-4817-afca-8cf18fbc3746 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-deleted-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.121 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec  6 05:13:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:13:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801167825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.540 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:44 np0005548915 podman[274383]: 2025-12-06 10:13:44.547377491 +0000 UTC m=+0.167204235 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 05:13:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.703 254824 DEBUG nova.compute.manager [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.704 254824 DEBUG oslo_concurrency.lockutils [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.704 254824 DEBUG oslo_concurrency.lockutils [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.705 254824 DEBUG oslo_concurrency.lockutils [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.705 254824 DEBUG nova.compute.manager [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] No waiting events found dispatching network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.705 254824 WARNING nova.compute.manager [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received unexpected event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.707 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.708 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4476MB free_disk=59.94276428222656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.708 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.708 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.764 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.764 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:13:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:44 np0005548915 nova_compute[254819]: 2025-12-06 10:13:44.793 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:13:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:13:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469516180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:13:45 np0005548915 nova_compute[254819]: 2025-12-06 10:13:45.290 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:13:45 np0005548915 nova_compute[254819]: 2025-12-06 10:13:45.296 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:13:45 np0005548915 nova_compute[254819]: 2025-12-06 10:13:45.311 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:13:45 np0005548915 nova_compute[254819]: 2025-12-06 10:13:45.340 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:13:45 np0005548915 nova_compute[254819]: 2025-12-06 10:13:45.340 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:45.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:13:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717895380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:13:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:13:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717895380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:13:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:46.013 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:13:46 np0005548915 nova_compute[254819]: 2025-12-06 10:13:46.015 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:46 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:46.017 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:13:46 np0005548915 nova_compute[254819]: 2025-12-06 10:13:46.222 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 19 KiB/s wr, 25 op/s
Dec  6 05:13:46 np0005548915 nova_compute[254819]: 2025-12-06 10:13:46.340 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:46 np0005548915 nova_compute[254819]: 2025-12-06 10:13:46.341 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:46 np0005548915 nova_compute[254819]: 2025-12-06 10:13:46.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:13:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 27 op/s
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:13:47 np0005548915 nova_compute[254819]: 2025-12-06 10:13:47.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:47 np0005548915 nova_compute[254819]: 2025-12-06 10:13:47.444 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:47 np0005548915 nova_compute[254819]: 2025-12-06 10:13:47.531 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:47.652Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:47 np0005548915 nova_compute[254819]: 2025-12-06 10:13:47.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:47.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:47.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:47 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:13:47 np0005548915 podman[274677]: 2025-12-06 10:13:47.965610015 +0000 UTC m=+0.067704019 container create 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:13:48 np0005548915 systemd[1]: Started libpod-conmon-3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d.scope.
Dec  6 05:13:48 np0005548915 podman[274677]: 2025-12-06 10:13:47.933575761 +0000 UTC m=+0.035669855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:13:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:13:48 np0005548915 podman[274677]: 2025-12-06 10:13:48.080171639 +0000 UTC m=+0.182265673 container init 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 05:13:48 np0005548915 podman[274677]: 2025-12-06 10:13:48.09318573 +0000 UTC m=+0.195279774 container start 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  6 05:13:48 np0005548915 podman[274677]: 2025-12-06 10:13:48.097813016 +0000 UTC m=+0.199907120 container attach 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:13:48 np0005548915 sweet_diffie[274693]: 167 167
Dec  6 05:13:48 np0005548915 systemd[1]: libpod-3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d.scope: Deactivated successfully.
Dec  6 05:13:48 np0005548915 podman[274677]: 2025-12-06 10:13:48.103734445 +0000 UTC m=+0.205828489 container died 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  6 05:13:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9618038b86bac6f984c3cb6df21a54adab478e3266442412b4a9802006b291aa-merged.mount: Deactivated successfully.
Dec  6 05:13:48 np0005548915 podman[274677]: 2025-12-06 10:13:48.159693556 +0000 UTC m=+0.261787600 container remove 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 05:13:48 np0005548915 systemd[1]: libpod-conmon-3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d.scope: Deactivated successfully.
Dec  6 05:13:48 np0005548915 podman[274717]: 2025-12-06 10:13:48.342848451 +0000 UTC m=+0.059292491 container create 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 05:13:48 np0005548915 podman[274717]: 2025-12-06 10:13:48.311807433 +0000 UTC m=+0.028251533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:13:48 np0005548915 systemd[1]: Started libpod-conmon-5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec.scope.
Dec  6 05:13:48 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:13:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:48 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:48 np0005548915 podman[274717]: 2025-12-06 10:13:48.462638536 +0000 UTC m=+0.179082556 container init 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:13:48 np0005548915 podman[274717]: 2025-12-06 10:13:48.470836847 +0000 UTC m=+0.187280847 container start 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:13:48 np0005548915 podman[274717]: 2025-12-06 10:13:48.47390613 +0000 UTC m=+0.190350200 container attach 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:13:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:48 np0005548915 nova_compute[254819]: 2025-12-06 10:13:48.751 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:48 np0005548915 nova_compute[254819]: 2025-12-06 10:13:48.755 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:13:48 np0005548915 nova_compute[254819]: 2025-12-06 10:13:48.755 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:13:48 np0005548915 nova_compute[254819]: 2025-12-06 10:13:48.772 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:13:48 np0005548915 nova_compute[254819]: 2025-12-06 10:13:48.773 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:48 np0005548915 trusting_pasteur[274733]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:13:48 np0005548915 trusting_pasteur[274733]: --> All data devices are unavailable
Dec  6 05:13:48 np0005548915 systemd[1]: libpod-5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec.scope: Deactivated successfully.
Dec  6 05:13:48 np0005548915 podman[274717]: 2025-12-06 10:13:48.880861818 +0000 UTC m=+0.597305868 container died 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:13:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4-merged.mount: Deactivated successfully.
Dec  6 05:13:48 np0005548915 podman[274717]: 2025-12-06 10:13:48.935914475 +0000 UTC m=+0.652358475 container remove 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:13:48 np0005548915 systemd[1]: libpod-conmon-5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec.scope: Deactivated successfully.
Dec  6 05:13:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:49.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 9.7 KiB/s wr, 35 op/s
Dec  6 05:13:49 np0005548915 podman[274852]: 2025-12-06 10:13:49.684114456 +0000 UTC m=+0.058445549 container create 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 05:13:49 np0005548915 systemd[1]: Started libpod-conmon-67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c.scope.
Dec  6 05:13:49 np0005548915 podman[274852]: 2025-12-06 10:13:49.65759522 +0000 UTC m=+0.031926313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:13:49 np0005548915 nova_compute[254819]: 2025-12-06 10:13:49.764 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:49 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:13:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:49.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:49 np0005548915 podman[274852]: 2025-12-06 10:13:49.791822444 +0000 UTC m=+0.166153587 container init 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:13:49 np0005548915 podman[274852]: 2025-12-06 10:13:49.800148649 +0000 UTC m=+0.174479732 container start 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 05:13:49 np0005548915 podman[274852]: 2025-12-06 10:13:49.804540678 +0000 UTC m=+0.178871831 container attach 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  6 05:13:49 np0005548915 happy_wilbur[274870]: 167 167
Dec  6 05:13:49 np0005548915 systemd[1]: libpod-67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c.scope: Deactivated successfully.
Dec  6 05:13:49 np0005548915 podman[274852]: 2025-12-06 10:13:49.807711403 +0000 UTC m=+0.182042496 container died 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 05:13:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:49.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:49 np0005548915 systemd[1]: var-lib-containers-storage-overlay-eb393a135f6f9732f95d8e0a84bac745c67fd4062abeeaa8ed33b22c525b1abd-merged.mount: Deactivated successfully.
Dec  6 05:13:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:49 np0005548915 podman[274852]: 2025-12-06 10:13:49.853771357 +0000 UTC m=+0.228102450 container remove 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:13:49 np0005548915 systemd[1]: libpod-conmon-67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c.scope: Deactivated successfully.
Dec  6 05:13:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:13:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:13:50 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:50.020 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:13:50 np0005548915 podman[274894]: 2025-12-06 10:13:50.114682552 +0000 UTC m=+0.059829557 container create d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 05:13:50 np0005548915 systemd[1]: Started libpod-conmon-d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef.scope.
Dec  6 05:13:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:13:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:50 np0005548915 podman[274894]: 2025-12-06 10:13:50.097770066 +0000 UTC m=+0.042917091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:13:50 np0005548915 podman[274894]: 2025-12-06 10:13:50.199945974 +0000 UTC m=+0.145093039 container init d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 05:13:50 np0005548915 podman[274894]: 2025-12-06 10:13:50.214050515 +0000 UTC m=+0.159197570 container start d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:13:50 np0005548915 podman[274894]: 2025-12-06 10:13:50.221030214 +0000 UTC m=+0.166177319 container attach d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 05:13:50 np0005548915 agitated_easley[274910]: {
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:    "1": [
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:        {
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "devices": [
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "/dev/loop3"
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            ],
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "lv_name": "ceph_lv0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "lv_size": "21470642176",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "name": "ceph_lv0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "tags": {
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.cluster_name": "ceph",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.crush_device_class": "",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.encrypted": "0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.osd_id": "1",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.type": "block",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.vdo": "0",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:                "ceph.with_tpm": "0"
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            },
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "type": "block",
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:            "vg_name": "ceph_vg0"
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:        }
Dec  6 05:13:50 np0005548915 agitated_easley[274910]:    ]
Dec  6 05:13:50 np0005548915 agitated_easley[274910]: }
Dec  6 05:13:50 np0005548915 systemd[1]: libpod-d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef.scope: Deactivated successfully.
Dec  6 05:13:50 np0005548915 podman[274894]: 2025-12-06 10:13:50.565173495 +0000 UTC m=+0.510320550 container died d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:13:50 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b-merged.mount: Deactivated successfully.
Dec  6 05:13:50 np0005548915 podman[274894]: 2025-12-06 10:13:50.610190412 +0000 UTC m=+0.555337407 container remove d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec  6 05:13:50 np0005548915 systemd[1]: libpod-conmon-d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef.scope: Deactivated successfully.
Dec  6 05:13:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101350 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:13:50 np0005548915 podman[274920]: 2025-12-06 10:13:50.672686489 +0000 UTC m=+0.068417589 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Dec  6 05:13:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:50 np0005548915 nova_compute[254819]: 2025-12-06 10:13:50.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:13:50 np0005548915 nova_compute[254819]: 2025-12-06 10:13:50.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:13:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:50] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:13:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:50] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:13:51 np0005548915 podman[275041]: 2025-12-06 10:13:51.153159531 +0000 UTC m=+0.037123603 container create e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 05:13:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:51 np0005548915 systemd[1]: Started libpod-conmon-e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a.scope.
Dec  6 05:13:51 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:13:51 np0005548915 nova_compute[254819]: 2025-12-06 10:13:51.224 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:51 np0005548915 podman[275041]: 2025-12-06 10:13:51.137573741 +0000 UTC m=+0.021537843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:13:51 np0005548915 podman[275041]: 2025-12-06 10:13:51.233590943 +0000 UTC m=+0.117555025 container init e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:13:51 np0005548915 podman[275041]: 2025-12-06 10:13:51.24050737 +0000 UTC m=+0.124471442 container start e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 05:13:51 np0005548915 podman[275041]: 2025-12-06 10:13:51.243573863 +0000 UTC m=+0.127537925 container attach e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 05:13:51 np0005548915 great_cerf[275058]: 167 167
Dec  6 05:13:51 np0005548915 systemd[1]: libpod-e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a.scope: Deactivated successfully.
Dec  6 05:13:51 np0005548915 podman[275041]: 2025-12-06 10:13:51.246568464 +0000 UTC m=+0.130532536 container died e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  6 05:13:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 9.7 KiB/s wr, 35 op/s
Dec  6 05:13:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-3ec9be8e12e979d505d1fd1d89cc44bf6532af94bfec497a22d9f70ba1e59666-merged.mount: Deactivated successfully.
Dec  6 05:13:51 np0005548915 podman[275041]: 2025-12-06 10:13:51.285225417 +0000 UTC m=+0.169189489 container remove e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:13:51 np0005548915 systemd[1]: libpod-conmon-e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a.scope: Deactivated successfully.
Dec  6 05:13:51 np0005548915 podman[275081]: 2025-12-06 10:13:51.433844121 +0000 UTC m=+0.041674707 container create 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec  6 05:13:51 np0005548915 systemd[1]: Started libpod-conmon-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope.
Dec  6 05:13:51 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:13:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:51 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:13:51 np0005548915 podman[275081]: 2025-12-06 10:13:51.414339054 +0000 UTC m=+0.022169670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:13:51 np0005548915 podman[275081]: 2025-12-06 10:13:51.509324478 +0000 UTC m=+0.117155084 container init 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:13:51 np0005548915 podman[275081]: 2025-12-06 10:13:51.516321507 +0000 UTC m=+0.124152093 container start 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:13:51 np0005548915 podman[275081]: 2025-12-06 10:13:51.520269363 +0000 UTC m=+0.128099979 container attach 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 05:13:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:51.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:51.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:52 np0005548915 lvm[275173]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:13:52 np0005548915 lvm[275173]: VG ceph_vg0 finished
Dec  6 05:13:52 np0005548915 amazing_hawking[275097]: {}
Dec  6 05:13:52 np0005548915 systemd[1]: libpod-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope: Deactivated successfully.
Dec  6 05:13:52 np0005548915 podman[275081]: 2025-12-06 10:13:52.229793142 +0000 UTC m=+0.837623758 container died 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:13:52 np0005548915 systemd[1]: libpod-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope: Consumed 1.106s CPU time.
Dec  6 05:13:52 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27-merged.mount: Deactivated successfully.
Dec  6 05:13:52 np0005548915 podman[275081]: 2025-12-06 10:13:52.277624993 +0000 UTC m=+0.885455589 container remove 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:13:52 np0005548915 systemd[1]: libpod-conmon-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope: Deactivated successfully.
Dec  6 05:13:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:13:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:13:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:52 np0005548915 nova_compute[254819]: 2025-12-06 10:13:52.399 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:13:52 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:52 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:13:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 10 KiB/s wr, 36 op/s
Dec  6 05:13:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:53.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:53.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:13:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:13:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:13:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:13:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:13:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:13:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:13:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:13:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:13:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:13:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:13:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:13:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:13:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101355 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:13:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 KiB/s wr, 10 op/s
Dec  6 05:13:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:55.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:55.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:56 np0005548915 nova_compute[254819]: 2025-12-06 10:13:56.226 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 KiB/s wr, 10 op/s
Dec  6 05:13:57 np0005548915 nova_compute[254819]: 2025-12-06 10:13:57.359 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016022.3577473, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:13:57 np0005548915 nova_compute[254819]: 2025-12-06 10:13:57.360 254824 INFO nova.compute.manager [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:13:57 np0005548915 nova_compute[254819]: 2025-12-06 10:13:57.387 254824 DEBUG nova.compute.manager [None req-68e9d75a-b2d2-4ca4-a44e-3032ba699fcd - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:13:57 np0005548915 nova_compute[254819]: 2025-12-06 10:13:57.436 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:13:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:57.654Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:13:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:57.654Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:13:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:57.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:13:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:57.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:59.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:13:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00012a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:13:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.6 KiB/s wr, 10 op/s
Dec  6 05:13:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T10:13:59.589508379Z level=info msg="Completed cleanup jobs" duration=25.458398ms
Dec  6 05:13:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T10:13:59.714327438Z level=info msg="Update check succeeded" duration=47.584395ms
Dec  6 05:13:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T10:13:59.721004899Z level=info msg="Update check succeeded" duration=93.902076ms
Dec  6 05:13:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:13:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:59.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:13:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:13:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:13:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:59.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:13:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:00] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:14:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:00] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:14:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:01 np0005548915 nova_compute[254819]: 2025-12-06 10:14:01.228 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec  6 05:14:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:02 np0005548915 nova_compute[254819]: 2025-12-06 10:14:02.440 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec  6 05:14:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:03.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:03.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.219 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.219 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.241 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.333 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.334 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.341 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.342 254824 INFO nova.compute.claims [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.449 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:14:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452760283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.914 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.923 254824 DEBUG nova.compute.provider_tree [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.942 254824 DEBUG nova.scheduler.client.report [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.971 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:04 np0005548915 nova_compute[254819]: 2025-12-06 10:14:04.972 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.029 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.029 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.048 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.074 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:14:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.192 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.194 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.195 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Creating image(s)#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.230 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:14:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.265 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.296 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.300 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.362 254824 DEBUG nova.policy [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.376 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.376 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.377 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.377 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.407 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.413 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:05.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.793 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:05.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.886 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.979 254824 DEBUG nova.objects.instance [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.992 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.993 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Ensure instance console log exists: /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.993 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.993 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:05 np0005548915 nova_compute[254819]: 2025-12-06 10:14:05.994 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:06 np0005548915 nova_compute[254819]: 2025-12-06 10:14:06.230 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:06 np0005548915 nova_compute[254819]: 2025-12-06 10:14:06.466 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Successfully created port: 6848cb43-8472-434b-a796-f96c3ce423e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:14:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.442 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.538 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Successfully updated port: 6848cb43-8472-434b-a796-f96c3ce423e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.567 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.568 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.568 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:14:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:07.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.678 254824 DEBUG nova.compute.manager [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.679 254824 DEBUG nova.compute.manager [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.679 254824 DEBUG oslo_concurrency.lockutils [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:14:07 np0005548915 nova_compute[254819]: 2025-12-06 10:14:07.717 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:14:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:07.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:07.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.555 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.589 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.590 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance network_info: |[{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.591 254824 DEBUG oslo_concurrency.lockutils [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.591 254824 DEBUG nova.network.neutron [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.594 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start _get_guest_xml network_info=[{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.599 254824 WARNING nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.604 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.605 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.614 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.615 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.615 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.616 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.617 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.617 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.617 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.618 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.618 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.618 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:14:08 np0005548915 nova_compute[254819]: 2025-12-06 10:14:08.624 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:14:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:14:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:09.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:14:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:09.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:14:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607869369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.065 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.097 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.102 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:14:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:14:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541364610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.582 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.585 254824 DEBUG nova.virt.libvirt.vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697052485',display_name='tempest-TestNetworkBasicOps-server-697052485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697052485',id=11,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAuhYKdKN9EDS1I/XZyg4WhafMZhuRCMz5uAEJQd26Rxd5WVAmZGHQIQO5WPFhGxsnRcRB0qgDKQ8dvJeA5b8MtdKHCXg8WKkLdZila9zexViJRw9mwokE7iqisT3z+5Ig==',key_name='tempest-TestNetworkBasicOps-1780141244',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-9i00mr91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:14:05Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=1a910dd4-6c75-4618-8b34-925e2d30f8b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.586 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.587 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.589 254824 DEBUG nova.objects.instance [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.607 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <uuid>1a910dd4-6c75-4618-8b34-925e2d30f8b9</uuid>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <name>instance-0000000b</name>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-697052485</nova:name>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:14:08</nova:creationTime>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <nova:port uuid="6848cb43-8472-434b-a796-f96c3ce423e2">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <entry name="serial">1a910dd4-6c75-4618-8b34-925e2d30f8b9</entry>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <entry name="uuid">1a910dd4-6c75-4618-8b34-925e2d30f8b9</entry>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:87:47:c3"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <target dev="tap6848cb43-84"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/console.log" append="off"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:14:09 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:14:09 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:14:09 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:14:09 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.609 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Preparing to wait for external event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.609 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.610 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.610 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.611 254824 DEBUG nova.virt.libvirt.vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697052485',display_name='tempest-TestNetworkBasicOps-server-697052485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697052485',id=11,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAuhYKdKN9EDS1I/XZyg4WhafMZhuRCMz5uAEJQd26Rxd5WVAmZGHQIQO5WPFhGxsnRcRB0qgDKQ8dvJeA5b8MtdKHCXg8WKkLdZila9zexViJRw9mwokE7iqisT3z+5Ig==',key_name='tempest-TestNetworkBasicOps-1780141244',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-9i00mr91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:14:05Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=1a910dd4-6c75-4618-8b34-925e2d30f8b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.611 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.612 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.612 254824 DEBUG os_vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.613 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.613 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.614 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.618 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.619 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6848cb43-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.619 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6848cb43-84, col_values=(('external_ids', {'iface-id': '6848cb43-8472-434b-a796-f96c3ce423e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:47:c3', 'vm-uuid': '1a910dd4-6c75-4618-8b34-925e2d30f8b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.621 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:09 np0005548915 NetworkManager[48882]: <info>  [1765016049.6221] manager: (tap6848cb43-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.623 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.632 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.633 254824 INFO os_vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84')#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.689 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.689 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.690 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:87:47:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.690 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Using config drive#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.715 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:14:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.836 254824 DEBUG nova.network.neutron [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.836 254824 DEBUG nova.network.neutron [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:14:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:09.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:09 np0005548915 nova_compute[254819]: 2025-12-06 10:14:09.853 254824 DEBUG oslo_concurrency.lockutils [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.008 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Creating config drive at /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.013 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpas_8k0d_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.141 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpas_8k0d_" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.167 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.171 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.343 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.346 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deleting local config drive /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config because it was imported into RBD.#033[00m
Dec  6 05:14:10 np0005548915 kernel: tap6848cb43-84: entered promiscuous mode
Dec  6 05:14:10 np0005548915 NetworkManager[48882]: <info>  [1765016050.4227] manager: (tap6848cb43-84): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Dec  6 05:14:10 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:10Z|00119|binding|INFO|Claiming lport 6848cb43-8472-434b-a796-f96c3ce423e2 for this chassis.
Dec  6 05:14:10 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:10Z|00120|binding|INFO|6848cb43-8472-434b-a796-f96c3ce423e2: Claiming fa:16:3e:87:47:c3 10.100.0.10
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.426 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.433 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.449 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:47:c3 10.100.0.10'], port_security=['fa:16:3e:87:47:c3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1a910dd4-6c75-4618-8b34-925e2d30f8b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1fd56fd-eb5a-422e-9da4-fb641a59e1a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1a37e6e-1014-49d4-9543-ee1567988851, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=6848cb43-8472-434b-a796-f96c3ce423e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.450 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 6848cb43-8472-434b-a796-f96c3ce423e2 in datapath ef8aaff1-03b0-4544-89c9-035c25f01e5c bound to our chassis#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.451 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef8aaff1-03b0-4544-89c9-035c25f01e5c#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.466 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9493e419-661b-4d97-b540-4a09d35c4311]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.466 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef8aaff1-01 in ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.468 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef8aaff1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.469 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f40bac-9537-4fb8-8573-ae1ea852c9e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.470 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[09875ad6-fcea-45af-b377-d84bb1fe2579]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 systemd-machined[216202]: New machine qemu-8-instance-0000000b.
Dec  6 05:14:10 np0005548915 systemd[1]: Started Virtual Machine qemu-8-instance-0000000b.
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.489 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[acf86e69-4ba0-433b-9e83-beb95c085466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 systemd-udevd[275593]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.517 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[92136581-6474-4dd0-8b96-f0260e058950]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 NetworkManager[48882]: <info>  [1765016050.5202] device (tap6848cb43-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:14:10 np0005548915 NetworkManager[48882]: <info>  [1765016050.5209] device (tap6848cb43-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:14:10 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:10Z|00121|binding|INFO|Setting lport 6848cb43-8472-434b-a796-f96c3ce423e2 ovn-installed in OVS
Dec  6 05:14:10 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:10Z|00122|binding|INFO|Setting lport 6848cb43-8472-434b-a796-f96c3ce423e2 up in Southbound
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.541 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.546 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[a960cd7f-27b5-4e26-8e8b-d1e94f7b3954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 NetworkManager[48882]: <info>  [1765016050.5529] manager: (tapef8aaff1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.552 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd098f1-7921-453b-bd26-b969af36c006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 systemd-udevd[275601]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.578 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[5c896a6d-7829-4bf9-86f3-67ab8be74bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.581 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[56b931bf-e0a2-4297-a6ab-14cd46980def]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 podman[275584]: 2025-12-06 10:14:10.586178034 +0000 UTC m=+0.091274855 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:14:10 np0005548915 NetworkManager[48882]: <info>  [1765016050.6024] device (tapef8aaff1-00): carrier: link connected
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.607 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[18e7d73f-319c-4ac0-b018-1cd0f405b7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.623 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[72e1323f-5a1d-4c81-a5f3-04b1250d946c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef8aaff1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:e2:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442108, 'reachable_time': 27672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275634, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.636 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[49a896bb-ffc9-466a-adca-0648f33a742e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:e290'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442108, 'tstamp': 442108}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275636, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.651 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4b4d0b-6b7e-42ea-99bb-47c39ede224a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef8aaff1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:e2:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442108, 'reachable_time': 27672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275637, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.679 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d8890c-27bc-4234-89dc-eb2a385149ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.732 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d562be83-7ace-41fd-80ab-1da7e4b2f093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.734 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef8aaff1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.734 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.735 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef8aaff1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:14:10 np0005548915 NetworkManager[48882]: <info>  [1765016050.7371] manager: (tapef8aaff1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec  6 05:14:10 np0005548915 kernel: tapef8aaff1-00: entered promiscuous mode
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.736 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.741 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef8aaff1-00, col_values=(('external_ids', {'iface-id': '6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:14:10 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:10Z|00123|binding|INFO|Releasing lport 6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2 from this chassis (sb_readonly=0)
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.745 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef8aaff1-03b0-4544-89c9-035c25f01e5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef8aaff1-03b0-4544-89c9-035c25f01e5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.746 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d993bfdb-e93d-4a3d-8c6d-58d6007c3d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.747 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-ef8aaff1-03b0-4544-89c9-035c25f01e5c
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/ef8aaff1-03b0-4544-89c9-035c25f01e5c.pid.haproxy
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID ef8aaff1-03b0-4544-89c9-035c25f01e5c
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:14:10 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.747 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'env', 'PROCESS_TAG=haproxy-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef8aaff1-03b0-4544-89c9-035c25f01e5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:14:10 np0005548915 nova_compute[254819]: 2025-12-06 10:14:10.757 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:10] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:14:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:10] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.048 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016051.048324, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.049 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Started (Lifecycle Event)#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.080 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.084 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016051.0491526, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.084 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Paused (Lifecycle Event)#033[00m
Dec  6 05:14:11 np0005548915 podman[275711]: 2025-12-06 10:14:11.09045419 +0000 UTC m=+0.046762913 container create 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.104 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.107 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.128 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:14:11 np0005548915 systemd[1]: Started libpod-conmon-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d.scope.
Dec  6 05:14:11 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:14:11 np0005548915 podman[275711]: 2025-12-06 10:14:11.067635804 +0000 UTC m=+0.023944527 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:14:11 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b93e3df8fb7a26445c0dd9f79f250dbd57ab6146ffb6d9a8c76505e995ddf4d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:11 np0005548915 podman[275711]: 2025-12-06 10:14:11.183702698 +0000 UTC m=+0.140011411 container init 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:14:11 np0005548915 podman[275711]: 2025-12-06 10:14:11.188714823 +0000 UTC m=+0.145023526 container start 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  6 05:14:11 np0005548915 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : New worker (275732) forked
Dec  6 05:14:11 np0005548915 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : Loading success.
Dec  6 05:14:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.271 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.397 254824 DEBUG nova.compute.manager [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.397 254824 DEBUG oslo_concurrency.lockutils [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.397 254824 DEBUG oslo_concurrency.lockutils [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.398 254824 DEBUG oslo_concurrency.lockutils [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.398 254824 DEBUG nova.compute.manager [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Processing event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.399 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.403 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016051.4029374, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.403 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Resumed (Lifecycle Event)#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.405 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.408 254824 INFO nova.virt.libvirt.driver [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance spawned successfully.#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.408 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.433 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.437 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.437 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.437 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.438 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.438 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.439 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.443 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.479 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.505 254824 INFO nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 6.31 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.506 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.571 254824 INFO nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 7.27 seconds to build instance.#033[00m
Dec  6 05:14:11 np0005548915 nova_compute[254819]: 2025-12-06 10:14:11.587 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:11.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec  6 05:14:13 np0005548915 nova_compute[254819]: 2025-12-06 10:14:13.513 254824 DEBUG nova.compute.manager [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:13 np0005548915 nova_compute[254819]: 2025-12-06 10:14:13.513 254824 DEBUG oslo_concurrency.lockutils [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:13 np0005548915 nova_compute[254819]: 2025-12-06 10:14:13.514 254824 DEBUG oslo_concurrency.lockutils [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:13 np0005548915 nova_compute[254819]: 2025-12-06 10:14:13.514 254824 DEBUG oslo_concurrency.lockutils [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:13 np0005548915 nova_compute[254819]: 2025-12-06 10:14:13.514 254824 DEBUG nova.compute.manager [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:14:13 np0005548915 nova_compute[254819]: 2025-12-06 10:14:13.515 254824 WARNING nova.compute.manager [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:14:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:13.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:13.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:14 np0005548915 NetworkManager[48882]: <info>  [1765016054.3999] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec  6 05:14:14 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:14Z|00124|binding|INFO|Releasing lport 6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2 from this chassis (sb_readonly=0)
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:14 np0005548915 NetworkManager[48882]: <info>  [1765016054.4029] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.452 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:14 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:14Z|00125|binding|INFO|Releasing lport 6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2 from this chassis (sb_readonly=0)
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.460 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.622 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.749 254824 DEBUG nova.compute.manager [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.749 254824 DEBUG nova.compute.manager [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.750 254824 DEBUG oslo_concurrency.lockutils [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.750 254824 DEBUG oslo_concurrency.lockutils [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:14:14 np0005548915 nova_compute[254819]: 2025-12-06 10:14:14.750 254824 DEBUG nova.network.neutron [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:14:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec  6 05:14:15 np0005548915 podman[275771]: 2025-12-06 10:14:15.503051362 +0000 UTC m=+0.125780526 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 05:14:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:15.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:15.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:16 np0005548915 nova_compute[254819]: 2025-12-06 10:14:16.237 254824 DEBUG nova.network.neutron [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:14:16 np0005548915 nova_compute[254819]: 2025-12-06 10:14:16.237 254824 DEBUG nova.network.neutron [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:14:16 np0005548915 nova_compute[254819]: 2025-12-06 10:14:16.257 254824 DEBUG oslo_concurrency.lockutils [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:14:16 np0005548915 nova_compute[254819]: 2025-12-06 10:14:16.273 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101416 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:14:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec  6 05:14:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:17.656Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:17.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:17.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.555714) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058555792, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1326, "num_deletes": 503, "total_data_size": 1887795, "memory_usage": 1910304, "flush_reason": "Manual Compaction"}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058571623, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1843544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28274, "largest_seqno": 29599, "table_properties": {"data_size": 1837641, "index_size": 2723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16377, "raw_average_key_size": 19, "raw_value_size": 1823788, "raw_average_value_size": 2202, "num_data_blocks": 117, "num_entries": 828, "num_filter_entries": 828, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015970, "oldest_key_time": 1765015970, "file_creation_time": 1765016058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 15985 microseconds, and 6072 cpu microseconds.
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.571718) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1843544 bytes OK
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.571759) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.574857) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.574875) EVENT_LOG_v1 {"time_micros": 1765016058574869, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.574894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1880854, prev total WAL file size 1880854, number of live WAL files 2.
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.575908) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1800KB)], [62(16MB)]
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058575945, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 19368432, "oldest_snapshot_seqno": -1}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5830 keys, 13151138 bytes, temperature: kUnknown
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058660535, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 13151138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13113623, "index_size": 21853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 150706, "raw_average_key_size": 25, "raw_value_size": 13009507, "raw_average_value_size": 2231, "num_data_blocks": 875, "num_entries": 5830, "num_filter_entries": 5830, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.660792) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 13151138 bytes
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.662197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.7 rd, 155.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 16.7 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(17.6) write-amplify(7.1) OK, records in: 6855, records dropped: 1025 output_compression: NoCompression
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.662213) EVENT_LOG_v1 {"time_micros": 1765016058662205, "job": 34, "event": "compaction_finished", "compaction_time_micros": 84703, "compaction_time_cpu_micros": 26381, "output_level": 6, "num_output_files": 1, "total_output_size": 13151138, "num_input_records": 6855, "num_output_records": 5830, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058662576, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058665150, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.575810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:14:18 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:14:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:19.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:14:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:19.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  6 05:14:19 np0005548915 nova_compute[254819]: 2025-12-06 10:14:19.670 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:19.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:19.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:20] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:14:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:20] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:14:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  6 05:14:21 np0005548915 nova_compute[254819]: 2025-12-06 10:14:21.275 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:21 np0005548915 podman[275804]: 2025-12-06 10:14:21.427621269 +0000 UTC m=+0.056445135 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:14:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:21.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  6 05:14:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:23.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:14:23
Dec  6 05:14:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:14:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:14:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', '.nfs', '.mgr', 'default.rgw.control', 'vms', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Dec  6 05:14:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:14:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:14:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000694346938692453 of space, bias 1.0, pg target 0.2083040816077359 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:14:24 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:24Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:47:c3 10.100.0.10
Dec  6 05:14:24 np0005548915 ovn_controller[152417]: 2025-12-06T10:14:24Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:47:c3 10.100.0.10
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:14:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:14:24 np0005548915 nova_compute[254819]: 2025-12-06 10:14:24.672 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  6 05:14:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  6 05:14:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:26 np0005548915 nova_compute[254819]: 2025-12-06 10:14:26.319 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  6 05:14:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:27.657Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:14:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:27.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:14:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  6 05:14:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  6 05:14:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:29.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 683 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec  6 05:14:29 np0005548915 nova_compute[254819]: 2025-12-06 10:14:29.675 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:30] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Dec  6 05:14:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:30] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Dec  6 05:14:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec  6 05:14:31 np0005548915 nova_compute[254819]: 2025-12-06 10:14:31.376 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:31.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  6 05:14:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Dec  6 05:14:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:34 np0005548915 nova_compute[254819]: 2025-12-06 10:14:34.718 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec  6 05:14:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:35.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:36 np0005548915 nova_compute[254819]: 2025-12-06 10:14:36.378 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec  6 05:14:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:37.658Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:14:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:14:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:37.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101438 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  6 05:14:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:14:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:14:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:39.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:14:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:39.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:14:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec  6 05:14:39 np0005548915 nova_compute[254819]: 2025-12-06 10:14:39.720 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:40] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:14:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:40] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:14:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec  6 05:14:41 np0005548915 nova_compute[254819]: 2025-12-06 10:14:41.382 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:41 np0005548915 podman[275870]: 2025-12-06 10:14:41.423923193 +0000 UTC m=+0.059784235 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec  6 05:14:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:41.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:14:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:14:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c0b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec  6 05:14:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:14:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:43.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:14:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:44 np0005548915 nova_compute[254819]: 2025-12-06 10:14:44.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:44 np0005548915 nova_compute[254819]: 2025-12-06 10:14:44.764 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  6 05:14:45 np0005548915 nova_compute[254819]: 2025-12-06 10:14:45.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:45 np0005548915 nova_compute[254819]: 2025-12-06 10:14:45.794 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:45 np0005548915 nova_compute[254819]: 2025-12-06 10:14:45.795 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:45 np0005548915 nova_compute[254819]: 2025-12-06 10:14:45.795 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:45 np0005548915 nova_compute[254819]: 2025-12-06 10:14:45.796 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:14:45 np0005548915 nova_compute[254819]: 2025-12-06 10:14:45.796 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:14:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548017511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.270 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.384 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:46 np0005548915 podman[275920]: 2025-12-06 10:14:46.409753073 +0000 UTC m=+0.088055908 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.412 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.413 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.580 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.581 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4306MB free_disk=59.89735412597656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.581 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.581 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.655 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 1a910dd4-6c75-4618-8b34-925e2d30f8b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.655 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.655 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:14:46 np0005548915 nova_compute[254819]: 2025-12-06 10:14:46.700 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:14:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c0d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:14:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025771904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:14:47 np0005548915 nova_compute[254819]: 2025-12-06 10:14:47.132 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:14:47 np0005548915 nova_compute[254819]: 2025-12-06 10:14:47.138 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:14:47 np0005548915 nova_compute[254819]: 2025-12-06 10:14:47.155 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:14:47 np0005548915 nova_compute[254819]: 2025-12-06 10:14:47.176 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:14:47 np0005548915 nova_compute[254819]: 2025-12-06 10:14:47.177 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  6 05:14:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:47.659Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:14:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:47.659Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:47.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c0f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:49.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.178 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.202 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.203 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.203 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:14:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.403 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.404 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.404 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.405 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.542 254824 INFO nova.compute.manager [None req-d2fa06a1-9829-407d-8f98-4e0b86cdd372 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Get console output#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.546 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:14:49 np0005548915 nova_compute[254819]: 2025-12-06 10:14:49.766 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:49.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:14:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  6 05:14:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:51 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:51.230 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:14:51 np0005548915 nova_compute[254819]: 2025-12-06 10:14:51.231 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:51 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:51.232 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:14:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  6 05:14:51 np0005548915 nova_compute[254819]: 2025-12-06 10:14:51.387 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:51 np0005548915 nova_compute[254819]: 2025-12-06 10:14:51.498 254824 DEBUG nova.compute.manager [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:51 np0005548915 nova_compute[254819]: 2025-12-06 10:14:51.498 254824 DEBUG nova.compute.manager [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:14:51 np0005548915 nova_compute[254819]: 2025-12-06 10:14:51.499 254824 DEBUG oslo_concurrency.lockutils [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:14:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:52 np0005548915 podman[275977]: 2025-12-06 10:14:52.425219495 +0000 UTC m=+0.052055626 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.513 254824 INFO nova.compute.manager [None req-9069f500-368b-4a42-8213-b99b4f718ed7 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Get console output#033[00m
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.517 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.645 254824 DEBUG nova.compute.manager [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.645 254824 DEBUG oslo_concurrency.lockutils [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.646 254824 DEBUG oslo_concurrency.lockutils [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.646 254824 DEBUG oslo_concurrency.lockutils [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.647 254824 DEBUG nova.compute.manager [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:14:52 np0005548915 nova_compute[254819]: 2025-12-06 10:14:52.647 254824 WARNING nova.compute.manager [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:14:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.100 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.120 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.121 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.121 254824 DEBUG oslo_concurrency.lockutils [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.121 254824 DEBUG nova.network.neutron [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.123 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.124 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.124 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.124 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.125 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.125 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:14:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  6 05:14:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  6 05:14:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 05:14:53 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 05:14:53 np0005548915 nova_compute[254819]: 2025-12-06 10:14:53.689 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:14:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:53.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:53.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:14:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:14:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:14:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:14:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:14:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:14:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:14:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.227 254824 DEBUG nova.compute.manager [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.227 254824 DEBUG nova.compute.manager [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.228 254824 DEBUG oslo_concurrency.lockutils [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:14:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:54.246 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.492 254824 INFO nova.compute.manager [None req-2652a2b8-eb0a-4ac4-af44-f6929c0c85ed 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Get console output#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.499 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:14:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.768 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.791 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.792 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.792 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 WARNING nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 WARNING nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 WARNING nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.#033[00m
Dec  6 05:14:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 05:14:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 05:14:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.928 254824 DEBUG nova.network.neutron [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.929 254824 DEBUG nova.network.neutron [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.947 254824 DEBUG oslo_concurrency.lockutils [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.948 254824 DEBUG oslo_concurrency.lockutils [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:14:54 np0005548915 nova_compute[254819]: 2025-12-06 10:14:54.948 254824 DEBUG nova.network.neutron [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:14:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 05:14:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 05:14:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 05:14:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:14:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/844916991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:14:56 np0005548915 nova_compute[254819]: 2025-12-06 10:14:56.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:56 np0005548915 podman[276198]: 2025-12-06 10:14:56.795621648 +0000 UTC m=+0.055177350 container create fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 05:14:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:56 np0005548915 systemd[1]: Started libpod-conmon-fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b.scope.
Dec  6 05:14:56 np0005548915 podman[276198]: 2025-12-06 10:14:56.771654872 +0000 UTC m=+0.031210594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:14:56 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:14:56 np0005548915 podman[276198]: 2025-12-06 10:14:56.908544288 +0000 UTC m=+0.168100010 container init fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:14:56 np0005548915 podman[276198]: 2025-12-06 10:14:56.917845909 +0000 UTC m=+0.177401611 container start fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:14:56 np0005548915 podman[276198]: 2025-12-06 10:14:56.922053993 +0000 UTC m=+0.181609715 container attach fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:14:56 np0005548915 flamboyant_gagarin[276215]: 167 167
Dec  6 05:14:56 np0005548915 systemd[1]: libpod-fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b.scope: Deactivated successfully.
Dec  6 05:14:56 np0005548915 podman[276198]: 2025-12-06 10:14:56.927069627 +0000 UTC m=+0.186625349 container died fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:14:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:14:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-c125087e2dbae22c157b6a8073fc68365cf51a4f07e308e6796ea093044b1c11-merged.mount: Deactivated successfully.
Dec  6 05:14:56 np0005548915 podman[276198]: 2025-12-06 10:14:56.971460486 +0000 UTC m=+0.231016218 container remove fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 05:14:56 np0005548915 systemd[1]: libpod-conmon-fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b.scope: Deactivated successfully.
Dec  6 05:14:57 np0005548915 podman[276238]: 2025-12-06 10:14:57.191938479 +0000 UTC m=+0.064871972 container create 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 05:14:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:57 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:14:57.235 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:14:57 np0005548915 systemd[1]: Started libpod-conmon-87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040.scope.
Dec  6 05:14:57 np0005548915 podman[276238]: 2025-12-06 10:14:57.168673952 +0000 UTC m=+0.041607495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:14:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:14:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:57 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:57 np0005548915 podman[276238]: 2025-12-06 10:14:57.298590409 +0000 UTC m=+0.171523992 container init 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:14:57 np0005548915 podman[276238]: 2025-12-06 10:14:57.312609008 +0000 UTC m=+0.185542541 container start 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:14:57 np0005548915 podman[276238]: 2025-12-06 10:14:57.317082489 +0000 UTC m=+0.190015992 container attach 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:14:57 np0005548915 nova_compute[254819]: 2025-12-06 10:14:57.393 254824 DEBUG nova.network.neutron [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:14:57 np0005548915 nova_compute[254819]: 2025-12-06 10:14:57.393 254824 DEBUG nova.network.neutron [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:14:57 np0005548915 nova_compute[254819]: 2025-12-06 10:14:57.414 254824 DEBUG oslo_concurrency.lockutils [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:14:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:57.660Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:14:57 np0005548915 boring_black[276254]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:14:57 np0005548915 boring_black[276254]: --> All data devices are unavailable
Dec  6 05:14:57 np0005548915 systemd[1]: libpod-87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040.scope: Deactivated successfully.
Dec  6 05:14:57 np0005548915 podman[276238]: 2025-12-06 10:14:57.75490179 +0000 UTC m=+0.627835303 container died 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:14:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2-merged.mount: Deactivated successfully.
Dec  6 05:14:57 np0005548915 podman[276238]: 2025-12-06 10:14:57.79897896 +0000 UTC m=+0.671912463 container remove 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  6 05:14:57 np0005548915 systemd[1]: libpod-conmon-87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040.scope: Deactivated successfully.
Dec  6 05:14:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:14:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:57.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:14:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:57.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:14:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 23 KiB/s wr, 33 op/s
Dec  6 05:14:58 np0005548915 podman[276372]: 2025-12-06 10:14:58.479144554 +0000 UTC m=+0.039742454 container create da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:14:58 np0005548915 systemd[1]: Started libpod-conmon-da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41.scope.
Dec  6 05:14:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:14:58 np0005548915 podman[276372]: 2025-12-06 10:14:58.544611122 +0000 UTC m=+0.105209022 container init da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:14:58 np0005548915 podman[276372]: 2025-12-06 10:14:58.55342334 +0000 UTC m=+0.114021270 container start da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:14:58 np0005548915 podman[276372]: 2025-12-06 10:14:58.557760767 +0000 UTC m=+0.118358667 container attach da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:14:58 np0005548915 podman[276372]: 2025-12-06 10:14:58.462039872 +0000 UTC m=+0.022637792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:14:58 np0005548915 objective_colden[276388]: 167 167
Dec  6 05:14:58 np0005548915 systemd[1]: libpod-da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41.scope: Deactivated successfully.
Dec  6 05:14:58 np0005548915 podman[276372]: 2025-12-06 10:14:58.560900722 +0000 UTC m=+0.121498622 container died da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:14:58 np0005548915 systemd[1]: var-lib-containers-storage-overlay-0e5c1f2a26ee874f0d6580fb7f0979ad20c13b2923b0b6db5ef3571ee1168769-merged.mount: Deactivated successfully.
Dec  6 05:14:58 np0005548915 podman[276372]: 2025-12-06 10:14:58.602858585 +0000 UTC m=+0.163456485 container remove da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Dec  6 05:14:58 np0005548915 systemd[1]: libpod-conmon-da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41.scope: Deactivated successfully.
Dec  6 05:14:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:58 np0005548915 podman[276412]: 2025-12-06 10:14:58.84010707 +0000 UTC m=+0.072133278 container create 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:14:58 np0005548915 systemd[1]: Started libpod-conmon-83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1.scope.
Dec  6 05:14:58 np0005548915 podman[276412]: 2025-12-06 10:14:58.815893467 +0000 UTC m=+0.047919695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:14:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:14:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:14:58 np0005548915 podman[276412]: 2025-12-06 10:14:58.946179484 +0000 UTC m=+0.178205712 container init 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 05:14:58 np0005548915 podman[276412]: 2025-12-06 10:14:58.953426801 +0000 UTC m=+0.185453049 container start 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 05:14:58 np0005548915 podman[276412]: 2025-12-06 10:14:58.958087826 +0000 UTC m=+0.190114074 container attach 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:14:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:59.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:14:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:59.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:14:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]: {
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:    "1": [
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:        {
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "devices": [
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "/dev/loop3"
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            ],
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "lv_name": "ceph_lv0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "lv_size": "21470642176",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "name": "ceph_lv0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "tags": {
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.cluster_name": "ceph",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.crush_device_class": "",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.encrypted": "0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.osd_id": "1",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.type": "block",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.vdo": "0",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:                "ceph.with_tpm": "0"
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            },
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "type": "block",
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:            "vg_name": "ceph_vg0"
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:        }
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]:    ]
Dec  6 05:14:59 np0005548915 hungry_heisenberg[276428]: }
Dec  6 05:14:59 np0005548915 systemd[1]: libpod-83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1.scope: Deactivated successfully.
Dec  6 05:14:59 np0005548915 podman[276412]: 2025-12-06 10:14:59.284381406 +0000 UTC m=+0.516407614 container died 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  6 05:14:59 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef-merged.mount: Deactivated successfully.
Dec  6 05:14:59 np0005548915 podman[276412]: 2025-12-06 10:14:59.323945394 +0000 UTC m=+0.555971602 container remove 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 05:14:59 np0005548915 systemd[1]: libpod-conmon-83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1.scope: Deactivated successfully.
Dec  6 05:14:59 np0005548915 nova_compute[254819]: 2025-12-06 10:14:59.772 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:14:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:14:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:59.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:14:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:14:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:14:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:14:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:00 np0005548915 podman[276541]: 2025-12-06 10:15:00.02564098 +0000 UTC m=+0.051525132 container create b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  6 05:15:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 9.1 KiB/s wr, 32 op/s
Dec  6 05:15:00 np0005548915 systemd[1]: Started libpod-conmon-b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f.scope.
Dec  6 05:15:00 np0005548915 podman[276541]: 2025-12-06 10:15:00.003473562 +0000 UTC m=+0.029357724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:15:00 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:15:00 np0005548915 podman[276541]: 2025-12-06 10:15:00.133892993 +0000 UTC m=+0.159777125 container init b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 05:15:00 np0005548915 podman[276541]: 2025-12-06 10:15:00.142088465 +0000 UTC m=+0.167972587 container start b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:15:00 np0005548915 magical_stonebraker[276555]: 167 167
Dec  6 05:15:00 np0005548915 systemd[1]: libpod-b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f.scope: Deactivated successfully.
Dec  6 05:15:00 np0005548915 podman[276541]: 2025-12-06 10:15:00.150774929 +0000 UTC m=+0.176659331 container attach b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 05:15:00 np0005548915 podman[276541]: 2025-12-06 10:15:00.151621162 +0000 UTC m=+0.177505274 container died b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec  6 05:15:00 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1063a60207d01401fadb09e49ed08c2dd1f07bc63cdbe5a48cdba28df22d358a-merged.mount: Deactivated successfully.
Dec  6 05:15:00 np0005548915 podman[276541]: 2025-12-06 10:15:00.203045301 +0000 UTC m=+0.228929423 container remove b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:15:00 np0005548915 systemd[1]: libpod-conmon-b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f.scope: Deactivated successfully.
Dec  6 05:15:00 np0005548915 podman[276583]: 2025-12-06 10:15:00.399745882 +0000 UTC m=+0.049286192 container create 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  6 05:15:00 np0005548915 systemd[1]: Started libpod-conmon-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope.
Dec  6 05:15:00 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:15:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:15:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:15:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:15:00 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:15:00 np0005548915 podman[276583]: 2025-12-06 10:15:00.377710407 +0000 UTC m=+0.027250807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:15:00 np0005548915 podman[276583]: 2025-12-06 10:15:00.476669828 +0000 UTC m=+0.126210138 container init 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:15:00 np0005548915 podman[276583]: 2025-12-06 10:15:00.489354191 +0000 UTC m=+0.138894531 container start 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:15:00 np0005548915 podman[276583]: 2025-12-06 10:15:00.494803939 +0000 UTC m=+0.144344379 container attach 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:15:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:00] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:15:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:00] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  6 05:15:01 np0005548915 lvm[276674]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:15:01 np0005548915 lvm[276674]: VG ceph_vg0 finished
Dec  6 05:15:01 np0005548915 infallible_lalande[276599]: {}
Dec  6 05:15:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:01 np0005548915 systemd[1]: libpod-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope: Deactivated successfully.
Dec  6 05:15:01 np0005548915 systemd[1]: libpod-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope: Consumed 1.176s CPU time.
Dec  6 05:15:01 np0005548915 podman[276583]: 2025-12-06 10:15:01.233648478 +0000 UTC m=+0.883188788 container died 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:15:01 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5-merged.mount: Deactivated successfully.
Dec  6 05:15:01 np0005548915 podman[276583]: 2025-12-06 10:15:01.286500795 +0000 UTC m=+0.936041095 container remove 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:15:01 np0005548915 systemd[1]: libpod-conmon-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope: Deactivated successfully.
Dec  6 05:15:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:15:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:15:01 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:15:01 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:15:01 np0005548915 nova_compute[254819]: 2025-12-06 10:15:01.436 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:01.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:15:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:15:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 9.1 KiB/s wr, 32 op/s
Dec  6 05:15:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.065 254824 DEBUG nova.compute.manager [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.065 254824 DEBUG nova.compute.manager [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.065 254824 DEBUG oslo_concurrency.lockutils [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.066 254824 DEBUG oslo_concurrency.lockutils [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.066 254824 DEBUG nova.network.neutron [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.182 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.182 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.183 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.183 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.183 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.184 254824 INFO nova.compute.manager [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Terminating instance#033[00m
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.185 254824 DEBUG nova.compute.manager [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:15:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:03 np0005548915 kernel: tap6848cb43-84 (unregistering): left promiscuous mode
Dec  6 05:15:03 np0005548915 NetworkManager[48882]: <info>  [1765016103.2384] device (tap6848cb43-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:03 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:03Z|00126|binding|INFO|Releasing lport 6848cb43-8472-434b-a796-f96c3ce423e2 from this chassis (sb_readonly=0)
Dec  6 05:15:03 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:03Z|00127|binding|INFO|Setting lport 6848cb43-8472-434b-a796-f96c3ce423e2 down in Southbound
Dec  6 05:15:03 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:03Z|00128|binding|INFO|Removing iface tap6848cb43-84 ovn-installed in OVS
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.260 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:47:c3 10.100.0.10'], port_security=['fa:16:3e:87:47:c3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1a910dd4-6c75-4618-8b34-925e2d30f8b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b1fd56fd-eb5a-422e-9da4-fb641a59e1a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1a37e6e-1014-49d4-9543-ee1567988851, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=6848cb43-8472-434b-a796-f96c3ce423e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.261 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 6848cb43-8472-434b-a796-f96c3ce423e2 in datapath ef8aaff1-03b0-4544-89c9-035c25f01e5c unbound from our chassis
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.262 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef8aaff1-03b0-4544-89c9-035c25f01e5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.263 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b0f03a-4623-47d0-8f41-96c8efb03a27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.264 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c namespace which is not needed anymore
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.273 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:15:03 np0005548915 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  6 05:15:03 np0005548915 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Consumed 15.641s CPU time.
Dec  6 05:15:03 np0005548915 systemd-machined[216202]: Machine qemu-8-instance-0000000b terminated.
Dec  6 05:15:03 np0005548915 NetworkManager[48882]: <info>  [1765016103.4064] manager: (tap6848cb43-84): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.408 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.415 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:15:03 np0005548915 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : haproxy version is 2.8.14-c23fe91
Dec  6 05:15:03 np0005548915 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : path to executable is /usr/sbin/haproxy
Dec  6 05:15:03 np0005548915 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [WARNING]  (275730) : Exiting Master process...
Dec  6 05:15:03 np0005548915 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [ALERT]    (275730) : Current worker (275732) exited with code 143 (Terminated)
Dec  6 05:15:03 np0005548915 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [WARNING]  (275730) : All workers exited. Exiting... (0)
Dec  6 05:15:03 np0005548915 systemd[1]: libpod-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d.scope: Deactivated successfully.
Dec  6 05:15:03 np0005548915 podman[276739]: 2025-12-06 10:15:03.429248619 +0000 UTC m=+0.057210936 container died 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.429 254824 INFO nova.virt.libvirt.driver [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance destroyed successfully.
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.430 254824 DEBUG nova.objects.instance [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.450 254824 DEBUG nova.virt.libvirt.vif [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697052485',display_name='tempest-TestNetworkBasicOps-server-697052485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697052485',id=11,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAuhYKdKN9EDS1I/XZyg4WhafMZhuRCMz5uAEJQd26Rxd5WVAmZGHQIQO5WPFhGxsnRcRB0qgDKQ8dvJeA5b8MtdKHCXg8WKkLdZila9zexViJRw9mwokE7iqisT3z+5Ig==',key_name='tempest-TestNetworkBasicOps-1780141244',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:14:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-9i00mr91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:14:11Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=1a910dd4-6c75-4618-8b34-925e2d30f8b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.451 254824 DEBUG nova.network.os_vif_util [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.451 254824 DEBUG nova.network.os_vif_util [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.451 254824 DEBUG os_vif [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.453 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.453 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6848cb43-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.455 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.457 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.460 254824 INFO os_vif [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84')
Dec  6 05:15:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d-userdata-shm.mount: Deactivated successfully.
Dec  6 05:15:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1b93e3df8fb7a26445c0dd9f79f250dbd57ab6146ffb6d9a8c76505e995ddf4d-merged.mount: Deactivated successfully.
Dec  6 05:15:03 np0005548915 podman[276739]: 2025-12-06 10:15:03.476289679 +0000 UTC m=+0.104251996 container cleanup 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 05:15:03 np0005548915 systemd[1]: libpod-conmon-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d.scope: Deactivated successfully.
Dec  6 05:15:03 np0005548915 podman[276795]: 2025-12-06 10:15:03.550790711 +0000 UTC m=+0.045854250 container remove 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.559 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6552a6-1729-4df3-9779-3b8fd01e528d]: (4, ('Sat Dec  6 10:15:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c (33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d)\n33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d\nSat Dec  6 10:15:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c (33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d)\n33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.562 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[31c69eb4-c2eb-4b44-b505-3d7a74c441f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.563 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef8aaff1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.566 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:15:03 np0005548915 kernel: tapef8aaff1-00: left promiscuous mode
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.580 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.584 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[44939c2c-a223-4707-861b-6323e43da863]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.604 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[168107c5-1feb-40d4-8902-117d78b0e3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.605 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3f08da-4125-4931-aebb-03960386b98e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.623 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4d97f407-eff2-4a5d-b497-db4ff09bf242]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442102, 'reachable_time': 33883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276813, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 systemd[1]: run-netns-ovnmeta\x2def8aaff1\x2d03b0\x2d4544\x2d89c9\x2d035c25f01e5c.mount: Deactivated successfully.
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.627 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  6 05:15:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.627 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[9075aaa6-c613-42ae-bcbb-bc1ff3a37079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 05:15:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:03.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.868 254824 INFO nova.virt.libvirt.driver [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deleting instance files /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9_del
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.869 254824 INFO nova.virt.libvirt.driver [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deletion of /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9_del complete
Dec  6 05:15:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:03.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.956 254824 INFO nova.compute.manager [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.957 254824 DEBUG oslo.service.loopingcall [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.957 254824 DEBUG nova.compute.manager [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 05:15:03 np0005548915 nova_compute[254819]: 2025-12-06 10:15:03.957 254824 DEBUG nova.network.neutron [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 05:15:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 121 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 32 op/s
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.476 254824 DEBUG nova.network.neutron [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.477 254824 DEBUG nova.network.neutron [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.494 254824 DEBUG oslo_concurrency.lockutils [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.540 254824 DEBUG nova.network.neutron [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.558 254824 INFO nova.compute.manager [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 0.60 seconds to deallocate network for instance.
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.630 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.631 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:15:04 np0005548915 nova_compute[254819]: 2025-12-06 10:15:04.670 254824 DEBUG oslo_concurrency.processutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 05:15:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:15:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815512636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.118 254824 DEBUG oslo_concurrency.processutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.124 254824 DEBUG nova.compute.provider_tree [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.139 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.140 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.140 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 WARNING nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.142 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.142 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.142 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.143 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.143 254824 WARNING nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.143 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-deleted-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.145 254824 DEBUG nova.scheduler.client.report [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.163 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.190 254824 INFO nova.scheduler.client.report [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 1a910dd4-6c75-4618-8b34-925e2d30f8b9#033[00m
Dec  6 05:15:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:05 np0005548915 nova_compute[254819]: 2025-12-06 10:15:05.242 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:05.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:05.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 121 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 32 op/s
Dec  6 05:15:06 np0005548915 nova_compute[254819]: 2025-12-06 10:15:06.492 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a0a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:07.661Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:15:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:07.661Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:15:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:07.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 7.7 KiB/s wr, 56 op/s
Dec  6 05:15:08 np0005548915 nova_compute[254819]: 2025-12-06 10:15:08.457 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=infra.usagestats t=2025-12-06T10:15:08.588248076Z level=info msg="Usage stats are ready to report"
Dec  6 05:15:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:15:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:15:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:09.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:09.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:09.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec  6 05:15:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a0c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:15:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:15:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:11 np0005548915 nova_compute[254819]: 2025-12-06 10:15:11.540 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:11.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:11.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec  6 05:15:12 np0005548915 nova_compute[254819]: 2025-12-06 10:15:12.436 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:12 np0005548915 podman[276847]: 2025-12-06 10:15:12.439694616 +0000 UTC m=+0.071607745 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 05:15:12 np0005548915 nova_compute[254819]: 2025-12-06 10:15:12.519 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a0e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:13 np0005548915 nova_compute[254819]: 2025-12-06 10:15:13.460 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:13.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:13.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec  6 05:15:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:15.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  6 05:15:16 np0005548915 nova_compute[254819]: 2025-12-06 10:15:16.541 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40043a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:17 np0005548915 podman[276897]: 2025-12-06 10:15:17.511497539 +0000 UTC m=+0.142429266 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:15:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:17.662Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:15:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:17.665Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:17.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  6 05:15:18 np0005548915 nova_compute[254819]: 2025-12-06 10:15:18.424 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016103.4228678, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:15:18 np0005548915 nova_compute[254819]: 2025-12-06 10:15:18.424 254824 INFO nova.compute.manager [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:15:18 np0005548915 nova_compute[254819]: 2025-12-06 10:15:18.447 254824 DEBUG nova.compute.manager [None req-42acda88-9785-4123-b4c2-f84a2d40a264 - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:15:18 np0005548915 nova_compute[254819]: 2025-12-06 10:15:18.464 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:19.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:19.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:19.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:15:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40043a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:15:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:15:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:21 np0005548915 nova_compute[254819]: 2025-12-06 10:15:21.542 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:21.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:15:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:23 np0005548915 podman[276930]: 2025-12-06 10:15:23.435555511 +0000 UTC m=+0.067896804 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 05:15:23 np0005548915 nova_compute[254819]: 2025-12-06 10:15:23.466 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:23.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:15:23
Dec  6 05:15:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:15:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:15:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.nfs', 'default.rgw.control', 'backups']
Dec  6 05:15:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:15:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:15:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:15:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:15:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:15:26 np0005548915 nova_compute[254819]: 2025-12-06 10:15:26.544 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:27.666Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:27.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.469 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.513 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.513 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.532 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.625 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.626 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.631 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.632 254824 INFO nova.compute.claims [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  6 05:15:28 np0005548915 nova_compute[254819]: 2025-12-06 10:15:28.739 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a1a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:29.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:15:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798072465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.174 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.179 254824 DEBUG nova.compute.provider_tree [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.195 254824 DEBUG nova.scheduler.client.report [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.215 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.215 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 05:15:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.257 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.257 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.277 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.293 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.386 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.388 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.388 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Creating image(s)#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.414 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.439 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.465 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.469 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.498 254824 DEBUG nova.policy [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.523 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.524 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.524 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.525 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.557 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.560 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.811 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:29.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:29 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.882 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 05:15:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:29.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:30 np0005548915 nova_compute[254819]: 2025-12-06 10:15:29.999 254824 DEBUG nova.objects.instance [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:15:30 np0005548915 nova_compute[254819]: 2025-12-06 10:15:30.023 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 05:15:30 np0005548915 nova_compute[254819]: 2025-12-06 10:15:30.023 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Ensure instance console log exists: /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 05:15:30 np0005548915 nova_compute[254819]: 2025-12-06 10:15:30.024 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:30 np0005548915 nova_compute[254819]: 2025-12-06 10:15:30.024 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:30 np0005548915 nova_compute[254819]: 2025-12-06 10:15:30.024 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:15:30 np0005548915 nova_compute[254819]: 2025-12-06 10:15:30.507 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Successfully created port: ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 05:15:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  6 05:15:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  6 05:15:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.272 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Successfully updated port: ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.289 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.289 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.290 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.397 254824 DEBUG nova.compute.manager [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.398 254824 DEBUG nova.compute.manager [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing instance network info cache due to event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.398 254824 DEBUG oslo_concurrency.lockutils [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:15:31 np0005548915 nova_compute[254819]: 2025-12-06 10:15:31.545 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:15:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:31.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:15:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:31.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:15:32 np0005548915 nova_compute[254819]: 2025-12-06 10:15:32.405 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 05:15:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.341 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.370 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.371 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance network_info: |[{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.372 254824 DEBUG oslo_concurrency.lockutils [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.372 254824 DEBUG nova.network.neutron [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.377 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start _get_guest_xml network_info=[{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.383 254824 WARNING nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.387 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.387 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.390 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.392 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.392 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.393 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.394 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.394 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.395 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.395 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.395 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.396 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.396 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.397 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.397 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.398 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.402 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.473 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:15:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/125323094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.905 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.932 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:15:33 np0005548915 nova_compute[254819]: 2025-12-06 10:15:33.936 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:15:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  6 05:15:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174551433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.381 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.384 254824 DEBUG nova.virt.libvirt.vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:15:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823850228',display_name='tempest-TestNetworkBasicOps-server-1823850228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823850228',id=13,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZ1JbYqKoCUxIiM8hDMgdZSsRHQUcoBjRF2DOasdBtdUJsR/+RRaag7cOntBUu6Pnxm7ZLVxvld0ACRX3Mi2/RpeAQ5OWV7PuIX+IEnS95lS5yg27/v0AunJEPN78t9BQ==',key_name='tempest-TestNetworkBasicOps-539349224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-tzfybp9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:15:29Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=7ebb0f0e-b16a-451f-b85a-623f5bcf704f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.385 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.385 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.387 254824 DEBUG nova.objects.instance [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.405 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] End _get_guest_xml xml=<domain type="kvm">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <uuid>7ebb0f0e-b16a-451f-b85a-623f5bcf704f</uuid>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <name>instance-0000000d</name>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <memory>131072</memory>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <vcpu>1</vcpu>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <metadata>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <nova:name>tempest-TestNetworkBasicOps-server-1823850228</nova:name>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <nova:creationTime>2025-12-06 10:15:33</nova:creationTime>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <nova:flavor name="m1.nano">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:memory>128</nova:memory>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:disk>1</nova:disk>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:swap>0</nova:swap>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:vcpus>1</nova:vcpus>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      </nova:flavor>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <nova:owner>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      </nova:owner>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <nova:ports>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <nova:port uuid="ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        </nova:port>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      </nova:ports>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </nova:instance>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  </metadata>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <sysinfo type="smbios">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <system>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <entry name="manufacturer">RDO</entry>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <entry name="product">OpenStack Compute</entry>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <entry name="serial">7ebb0f0e-b16a-451f-b85a-623f5bcf704f</entry>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <entry name="uuid">7ebb0f0e-b16a-451f-b85a-623f5bcf704f</entry>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <entry name="family">Virtual Machine</entry>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </system>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  </sysinfo>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <os>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <boot dev="hd"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <smbios mode="sysinfo"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  </os>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <features>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <acpi/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <apic/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <vmcoreinfo/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  </features>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <clock offset="utc">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <timer name="hpet" present="no"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  </clock>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <cpu mode="host-model" match="exact">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  </cpu>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  <devices>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <disk type="network" device="disk">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <target dev="vda" bus="virtio"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <disk type="network" device="cdrom">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <driver type="raw" cache="none"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <source protocol="rbd" name="vms/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <host name="192.168.122.100" port="6789"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <host name="192.168.122.102" port="6789"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <host name="192.168.122.101" port="6789"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      </source>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <auth username="openstack">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:        <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      </auth>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <target dev="sda" bus="sata"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </disk>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <interface type="ethernet">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <mac address="fa:16:3e:21:72:5e"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <mtu size="1442"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <target dev="tapea0f2c61-7d"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </interface>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <serial type="pty">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <log file="/var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/console.log" append="off"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </serial>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <video>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <model type="virtio"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </video>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <input type="tablet" bus="usb"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <rng model="virtio">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <backend model="random">/dev/urandom</backend>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </rng>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <controller type="usb" index="0"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    <memballoon model="virtio">
Dec  6 05:15:34 np0005548915 nova_compute[254819]:      <stats period="10"/>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:    </memballoon>
Dec  6 05:15:34 np0005548915 nova_compute[254819]:  </devices>
Dec  6 05:15:34 np0005548915 nova_compute[254819]: </domain>
Dec  6 05:15:34 np0005548915 nova_compute[254819]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.407 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Preparing to wait for external event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.407 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.407 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.408 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.408 254824 DEBUG nova.virt.libvirt.vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:15:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823850228',display_name='tempest-TestNetworkBasicOps-server-1823850228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823850228',id=13,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZ1JbYqKoCUxIiM8hDMgdZSsRHQUcoBjRF2DOasdBtdUJsR/+RRaag7cOntBUu6Pnxm7ZLVxvld0ACRX3Mi2/RpeAQ5OWV7PuIX+IEnS95lS5yg27/v0AunJEPN78t9BQ==',key_name='tempest-TestNetworkBasicOps-539349224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-tzfybp9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:15:29Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=7ebb0f0e-b16a-451f-b85a-623f5bcf704f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.409 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.409 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.409 254824 DEBUG os_vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.410 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.410 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.411 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.413 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.413 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0f2c61-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.414 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea0f2c61-7d, col_values=(('external_ids', {'iface-id': 'ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:72:5e', 'vm-uuid': '7ebb0f0e-b16a-451f-b85a-623f5bcf704f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.415 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:34 np0005548915 NetworkManager[48882]: <info>  [1765016134.4161] manager: (tapea0f2c61-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.417 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.421 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.422 254824 INFO os_vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d')#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.480 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.481 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.481 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:21:72:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.481 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Using config drive#033[00m
Dec  6 05:15:34 np0005548915 nova_compute[254819]: 2025-12-06 10:15:34.508 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:15:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.096 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Creating config drive at /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.101 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppda7q39t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.126 254824 DEBUG nova.network.neutron [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated VIF entry in instance network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.127 254824 DEBUG nova.network.neutron [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.155 254824 DEBUG oslo_concurrency.lockutils [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.227 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppda7q39t" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.256 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.260 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.404 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.405 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deleting local config drive /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config because it was imported into RBD.#033[00m
Dec  6 05:15:35 np0005548915 kernel: tapea0f2c61-7d: entered promiscuous mode
Dec  6 05:15:35 np0005548915 NetworkManager[48882]: <info>  [1765016135.4576] manager: (tapea0f2c61-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.458 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:35Z|00129|binding|INFO|Claiming lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for this chassis.
Dec  6 05:15:35 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:35Z|00130|binding|INFO|ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd: Claiming fa:16:3e:21:72:5e 10.100.0.7
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.462 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.465 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.472 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:72:5e 10.100.0.7'], port_security=['fa:16:3e:21:72:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7ebb0f0e-b16a-451f-b85a-623f5bcf704f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d5ca921-3bfd-449d-8b5d-30ae22ce26cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81c5896f-af1e-41c2-8dce-fe719e73d950, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.474 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd in datapath f19a420c-d088-44ba-92a5-ba4d8025ce6c bound to our chassis#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.475 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f19a420c-d088-44ba-92a5-ba4d8025ce6c#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.487 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[35f13e4c-5465-475d-92be-ab3ea9b13796]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.488 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf19a420c-d1 in ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.490 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf19a420c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.490 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2e1bb9-af8d-4e6a-8a47-10668dc3fec7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.491 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[36a8d2a9-2d79-4ea8-9c19-cdc1ca499874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 systemd-machined[216202]: New machine qemu-9-instance-0000000d.
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.500 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[5d886b1d-8e9a-4851-a572-a13f7c7a0795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.525 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3337d-531c-43de-8b1f-8e13eb9c5307]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 systemd[1]: Started Virtual Machine qemu-9-instance-0000000d.
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.530 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:35Z|00131|binding|INFO|Setting lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd ovn-installed in OVS
Dec  6 05:15:35 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:35Z|00132|binding|INFO|Setting lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd up in Southbound
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.536 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 systemd-udevd[277317]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 05:15:35 np0005548915 NetworkManager[48882]: <info>  [1765016135.5562] device (tapea0f2c61-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 05:15:35 np0005548915 NetworkManager[48882]: <info>  [1765016135.5573] device (tapea0f2c61-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.557 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2080d0-e5d5-4214-8945-1e1ce5e51fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 NetworkManager[48882]: <info>  [1765016135.5649] manager: (tapf19a420c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.563 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1f45c264-4908-46e4-856a-f264f8c0f18d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.590 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[7315ab38-26b7-493a-af90-b429b829aaef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.594 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[85bb3ea4-0944-447a-b0ba-7feeb568e582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 NetworkManager[48882]: <info>  [1765016135.6191] device (tapf19a420c-d0): carrier: link connected
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.623 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[3dded76e-1cfc-44aa-a06e-e5fc0a04147d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.638 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[752976eb-84c3-453a-9e78-c77a50c6e40d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19a420c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:ae:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450610, 'reachable_time': 17981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277348, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.663 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0ada4269-0c9e-41a4-92c2-5b6ff2c0aa51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:ae99'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450610, 'tstamp': 450610}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277349, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.683 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3932cc-64e9-4301-970c-8afbee4c60fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19a420c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:ae:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450610, 'reachable_time': 17981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277352, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.710 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac0a40e-3178-4864-9a19-1127f02e32af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.776 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bae67d47-9bf7-42d8-97cd-5441d5d7b98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.777 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19a420c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.777 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.778 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf19a420c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:15:35 np0005548915 kernel: tapf19a420c-d0: entered promiscuous mode
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.779 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 NetworkManager[48882]: <info>  [1765016135.7815] manager: (tapf19a420c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.782 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.782 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf19a420c-d0, col_values=(('external_ids', {'iface-id': 'e6dea8f3-ba9b-4ce4-acbb-0df65f10749a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.783 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:35Z|00133|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.801 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.803 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f19a420c-d088-44ba-92a5-ba4d8025ce6c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f19a420c-d088-44ba-92a5-ba4d8025ce6c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.804 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6e17d88c-a0b4-4ba5-8cbc-00761249211e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.805 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: global
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    log         /dev/log local0 debug
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    log-tag     haproxy-metadata-proxy-f19a420c-d088-44ba-92a5-ba4d8025ce6c
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    user        root
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    group       root
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    maxconn     1024
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    pidfile     /var/lib/neutron/external/pids/f19a420c-d088-44ba-92a5-ba4d8025ce6c.pid.haproxy
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    daemon
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: defaults
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    log global
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    mode http
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    option httplog
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    option dontlognull
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    option http-server-close
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    option forwardfor
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    retries                 3
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    timeout http-request    30s
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    timeout connect         30s
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    timeout client          32s
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    timeout server          32s
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    timeout http-keep-alive 30s
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: listen listener
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    bind 169.254.169.254:80
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]:    http-request add-header X-OVN-Network-ID f19a420c-d088-44ba-92a5-ba4d8025ce6c
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.805 254824 DEBUG nova.compute.manager [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG oslo_concurrency.lockutils [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG oslo_concurrency.lockutils [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG oslo_concurrency.lockutils [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:35 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.806 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'env', 'PROCESS_TAG=haproxy-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f19a420c-d088-44ba-92a5-ba4d8025ce6c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 05:15:35 np0005548915 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG nova.compute.manager [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Processing event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 05:15:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:35.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  6 05:15:36 np0005548915 podman[277384]: 2025-12-06 10:15:36.191866761 +0000 UTC m=+0.054688068 container create 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:15:36 np0005548915 systemd[1]: Started libpod-conmon-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92.scope.
Dec  6 05:15:36 np0005548915 podman[277384]: 2025-12-06 10:15:36.161802049 +0000 UTC m=+0.024623366 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  6 05:15:36 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:15:36 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbfc0c4f48e5f7f64a74644fc0e70facc48d03f5e9acb49b25622ca059f294b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 05:15:36 np0005548915 podman[277384]: 2025-12-06 10:15:36.279903318 +0000 UTC m=+0.142724675 container init 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 05:15:36 np0005548915 podman[277384]: 2025-12-06 10:15:36.284665917 +0000 UTC m=+0.147487224 container start 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:15:36 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : New worker (277405) forked
Dec  6 05:15:36 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : Loading success.
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.598 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:36 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.872 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016136.8716395, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.873 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Started (Lifecycle Event)#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.876 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.881 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.886 254824 INFO nova.virt.libvirt.driver [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance spawned successfully.#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.886 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.892 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.895 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.907 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.908 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.908 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.908 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.909 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.909 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.917 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.917 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016136.8770893, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.918 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Paused (Lifecycle Event)#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.944 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.947 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016136.879893, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.947 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Resumed (Lifecycle Event)#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.972 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.974 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.978 254824 INFO nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 7.59 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 05:15:36 np0005548915 nova_compute[254819]: 2025-12-06 10:15:36.978 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.001 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.034 254824 INFO nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 8.44 seconds to build instance.#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.052 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:37.668Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.881 254824 DEBUG nova.compute.manager [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG oslo_concurrency.lockutils [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG oslo_concurrency.lockutils [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG oslo_concurrency.lockutils [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG nova.compute.manager [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] No waiting events found dispatching network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:15:37 np0005548915 nova_compute[254819]: 2025-12-06 10:15:37.883 254824 WARNING nova.compute.manager [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received unexpected event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for instance with vm_state active and task_state None.#033[00m
Dec  6 05:15:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:37.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec  6 05:15:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:15:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:15:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:39.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:39 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:39 np0005548915 nova_compute[254819]: 2025-12-06 10:15:39.415 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:39.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec  6 05:15:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:40] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec  6 05:15:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:40] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec  6 05:15:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:41 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:41Z|00134|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.531 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:41 np0005548915 NetworkManager[48882]: <info>  [1765016141.5325] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Dec  6 05:15:41 np0005548915 NetworkManager[48882]: <info>  [1765016141.5333] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Dec  6 05:15:41 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:41Z|00135|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.569 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.574 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.599 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:41.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.964 254824 DEBUG nova.compute.manager [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG nova.compute.manager [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing instance network info cache due to event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG oslo_concurrency.lockutils [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG oslo_concurrency.lockutils [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:15:41 np0005548915 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG nova.network.neutron [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:15:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec  6 05:15:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:42 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:43 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:43 np0005548915 podman[277463]: 2025-12-06 10:15:43.422973155 +0000 UTC m=+0.052873719 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:15:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  6 05:15:44 np0005548915 nova_compute[254819]: 2025-12-06 10:15:44.416 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:44 np0005548915 nova_compute[254819]: 2025-12-06 10:15:44.430 254824 DEBUG nova.network.neutron [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated VIF entry in instance network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:15:44 np0005548915 nova_compute[254819]: 2025-12-06 10:15:44.431 254824 DEBUG nova.network.neutron [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:15:44 np0005548915 nova_compute[254819]: 2025-12-06 10:15:44.451 254824 DEBUG oslo_concurrency.lockutils [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:15:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:44 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:45 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:45 np0005548915 nova_compute[254819]: 2025-12-06 10:15:45.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:45.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:15:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  6 05:15:46 np0005548915 nova_compute[254819]: 2025-12-06 10:15:46.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:46 np0005548915 nova_compute[254819]: 2025-12-06 10:15:46.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:46 np0005548915 nova_compute[254819]: 2025-12-06 10:15:46.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:46 np0005548915 nova_compute[254819]: 2025-12-06 10:15:46.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:46 np0005548915 nova_compute[254819]: 2025-12-06 10:15:46.774 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:15:46 np0005548915 nova_compute[254819]: 2025-12-06 10:15:46.775 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:46 np0005548915 nova_compute[254819]: 2025-12-06 10:15:46.805 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:46 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:15:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803708120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.266 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.344 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.345 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.499 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.500 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4308MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.500 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.501 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.584 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 7ebb0f0e-b16a-451f-b85a-623f5bcf704f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.585 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.585 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:15:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:47.669Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.705 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.764 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.764 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.780 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 05:15:47 np0005548915 nova_compute[254819]: 2025-12-06 10:15:47.800 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 05:15:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:15:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:47.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:15:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  6 05:15:48 np0005548915 nova_compute[254819]: 2025-12-06 10:15:48.142 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:15:48 np0005548915 podman[277535]: 2025-12-06 10:15:48.441911158 +0000 UTC m=+0.075150709 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:15:48 np0005548915 nova_compute[254819]: 2025-12-06 10:15:48.648 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:15:48 np0005548915 nova_compute[254819]: 2025-12-06 10:15:48.652 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:15:48 np0005548915 nova_compute[254819]: 2025-12-06 10:15:48.669 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:15:48 np0005548915 nova_compute[254819]: 2025-12-06 10:15:48.697 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:15:48 np0005548915 nova_compute[254819]: 2025-12-06 10:15:48.698 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a1c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:48 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:49.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:49 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:49 np0005548915 nova_compute[254819]: 2025-12-06 10:15:49.418 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:49 np0005548915 nova_compute[254819]: 2025-12-06 10:15:49.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:49 np0005548915 nova_compute[254819]: 2025-12-06 10:15:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:49 np0005548915 nova_compute[254819]: 2025-12-06 10:15:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:49 np0005548915 nova_compute[254819]: 2025-12-06 10:15:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:49 np0005548915 nova_compute[254819]: 2025-12-06 10:15:49.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 05:15:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:49.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:49 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:49Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:72:5e 10.100.0.7
Dec  6 05:15:49 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:49Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:72:5e 10.100.0.7
Dec  6 05:15:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec  6 05:15:50 np0005548915 nova_compute[254819]: 2025-12-06 10:15:50.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:50 np0005548915 nova_compute[254819]: 2025-12-06 10:15:50.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:15:50 np0005548915 nova_compute[254819]: 2025-12-06 10:15:50.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:15:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a1e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:50] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec  6 05:15:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:50] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec  6 05:15:51 np0005548915 nova_compute[254819]: 2025-12-06 10:15:51.169 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:15:51 np0005548915 nova_compute[254819]: 2025-12-06 10:15:51.169 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:15:51 np0005548915 nova_compute[254819]: 2025-12-06 10:15:51.169 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 05:15:51 np0005548915 nova_compute[254819]: 2025-12-06 10:15:51.170 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:15:51 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:51 np0005548915 nova_compute[254819]: 2025-12-06 10:15:51.808 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:51.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.773 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:15:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.797 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.798 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.800 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.800 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.800 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.817 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 05:15:52 np0005548915 nova_compute[254819]: 2025-12-06 10:15:52.817 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:52 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:53 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:53 np0005548915 nova_compute[254819]: 2025-12-06 10:15:53.820 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:15:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:53.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:15:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:15:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:15:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:15:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:15:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:15:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:15:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:15:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec  6 05:15:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:54.246 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:15:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:15:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:15:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:15:54 np0005548915 podman[277594]: 2025-12-06 10:15:54.41852119 +0000 UTC m=+0.068873941 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  6 05:15:54 np0005548915 nova_compute[254819]: 2025-12-06 10:15:54.420 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:54 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:55 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:15:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:55.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:15:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  6 05:15:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:56 np0005548915 nova_compute[254819]: 2025-12-06 10:15:56.811 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:56 np0005548915 nova_compute[254819]: 2025-12-06 10:15:56.853 254824 INFO nova.compute.manager [None req-65ace017-84b7-41ed-9c05-5fd6ce5a20dd 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Get console output#033[00m
Dec  6 05:15:56 np0005548915 nova_compute[254819]: 2025-12-06 10:15:56.861 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:15:56 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:57.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:15:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:57.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:15:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:57.670Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:57 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:57Z|00136|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec  6 05:15:57 np0005548915 nova_compute[254819]: 2025-12-06 10:15:57.752 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:57 np0005548915 ovn_controller[152417]: 2025-12-06T10:15:57Z|00137|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec  6 05:15:57 np0005548915 nova_compute[254819]: 2025-12-06 10:15:57.836 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:15:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:15:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:15:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:15:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:15:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:58 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:59.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:15:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:59.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:15:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:15:59 np0005548915 nova_compute[254819]: 2025-12-06 10:15:59.423 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:15:59 np0005548915 nova_compute[254819]: 2025-12-06 10:15:59.619 254824 INFO nova.compute.manager [None req-d61050ab-a206-4dcc-80cf-59e60c0415e8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Get console output#033[00m
Dec  6 05:15:59 np0005548915 nova_compute[254819]: 2025-12-06 10:15:59.625 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:15:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:15:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:15:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:15:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:59.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:16:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  6 05:16:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:00] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:16:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:00] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec  6 05:16:01 np0005548915 nova_compute[254819]: 2025-12-06 10:16:01.199 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:01 np0005548915 NetworkManager[48882]: <info>  [1765016161.2015] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Dec  6 05:16:01 np0005548915 NetworkManager[48882]: <info>  [1765016161.2028] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Dec  6 05:16:01 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:01 np0005548915 nova_compute[254819]: 2025-12-06 10:16:01.293 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:01 np0005548915 ovn_controller[152417]: 2025-12-06T10:16:01Z|00138|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec  6 05:16:01 np0005548915 nova_compute[254819]: 2025-12-06 10:16:01.302 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:01.400 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:16:01 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:01.401 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:16:01 np0005548915 nova_compute[254819]: 2025-12-06 10:16:01.401 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:01 np0005548915 nova_compute[254819]: 2025-12-06 10:16:01.538 254824 INFO nova.compute.manager [None req-9476a095-577b-45af-b4a7-0c52a7c4c673 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Get console output#033[00m
Dec  6 05:16:01 np0005548915 nova_compute[254819]: 2025-12-06 10:16:01.543 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 05:16:01 np0005548915 nova_compute[254819]: 2025-12-06 10:16:01.813 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:01.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:02.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  6 05:16:02 np0005548915 podman[277747]: 2025-12-06 10:16:02.428325806 +0000 UTC m=+0.066238470 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec  6 05:16:02 np0005548915 podman[277747]: 2025-12-06 10:16:02.546863426 +0000 UTC m=+0.184776040 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:16:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:02 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:03 np0005548915 podman[277864]: 2025-12-06 10:16:03.12327672 +0000 UTC m=+0.060866885 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:16:03 np0005548915 podman[277864]: 2025-12-06 10:16:03.13475105 +0000 UTC m=+0.072341215 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:16:03 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:03 np0005548915 podman[277956]: 2025-12-06 10:16:03.455212153 +0000 UTC m=+0.059588941 container exec c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Dec  6 05:16:03 np0005548915 podman[277956]: 2025-12-06 10:16:03.476770144 +0000 UTC m=+0.081146932 container exec_died c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.624 254824 DEBUG nova.compute.manager [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.624 254824 DEBUG nova.compute.manager [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing instance network info cache due to event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.624 254824 DEBUG oslo_concurrency.lockutils [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.625 254824 DEBUG oslo_concurrency.lockutils [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.625 254824 DEBUG nova.network.neutron [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.683 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.684 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.684 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.685 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.685 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.686 254824 INFO nova.compute.manager [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Terminating instance#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.687 254824 DEBUG nova.compute.manager [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 05:16:03 np0005548915 podman[278018]: 2025-12-06 10:16:03.704246977 +0000 UTC m=+0.056004113 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 05:16:03 np0005548915 podman[278018]: 2025-12-06 10:16:03.717815893 +0000 UTC m=+0.069573019 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 05:16:03 np0005548915 kernel: tapea0f2c61-7d (unregistering): left promiscuous mode
Dec  6 05:16:03 np0005548915 NetworkManager[48882]: <info>  [1765016163.7541] device (tapea0f2c61-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 05:16:03 np0005548915 ovn_controller[152417]: 2025-12-06T10:16:03Z|00139|binding|INFO|Releasing lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd from this chassis (sb_readonly=0)
Dec  6 05:16:03 np0005548915 ovn_controller[152417]: 2025-12-06T10:16:03Z|00140|binding|INFO|Setting lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd down in Southbound
Dec  6 05:16:03 np0005548915 ovn_controller[152417]: 2025-12-06T10:16:03Z|00141|binding|INFO|Removing iface tapea0f2c61-7d ovn-installed in OVS
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.768 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.775 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:72:5e 10.100.0.7'], port_security=['fa:16:3e:21:72:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7ebb0f0e-b16a-451f-b85a-623f5bcf704f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d5ca921-3bfd-449d-8b5d-30ae22ce26cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81c5896f-af1e-41c2-8dce-fe719e73d950, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:16:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.776 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd in datapath f19a420c-d088-44ba-92a5-ba4d8025ce6c unbound from our chassis#033[00m
Dec  6 05:16:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.778 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f19a420c-d088-44ba-92a5-ba4d8025ce6c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 05:16:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.779 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a910af-f461-47cb-93cf-ff33bfd96e6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:03 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.779 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c namespace which is not needed anymore#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.794 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:03 np0005548915 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  6 05:16:03 np0005548915 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000d.scope: Consumed 14.207s CPU time.
Dec  6 05:16:03 np0005548915 systemd-machined[216202]: Machine qemu-9-instance-0000000d terminated.
Dec  6 05:16:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:03.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:03 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : haproxy version is 2.8.14-c23fe91
Dec  6 05:16:03 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : path to executable is /usr/sbin/haproxy
Dec  6 05:16:03 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [WARNING]  (277403) : Exiting Master process...
Dec  6 05:16:03 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [WARNING]  (277403) : Exiting Master process...
Dec  6 05:16:03 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [ALERT]    (277403) : Current worker (277405) exited with code 143 (Terminated)
Dec  6 05:16:03 np0005548915 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [WARNING]  (277403) : All workers exited. Exiting... (0)
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.913 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:03 np0005548915 systemd[1]: libpod-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92.scope: Deactivated successfully.
Dec  6 05:16:03 np0005548915 podman[278104]: 2025-12-06 10:16:03.921323268 +0000 UTC m=+0.052377766 container died 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.924 254824 INFO nova.virt.libvirt.driver [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance destroyed successfully.#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.925 254824 DEBUG nova.objects.instance [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 05:16:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92-userdata-shm.mount: Deactivated successfully.
Dec  6 05:16:03 np0005548915 podman[278115]: 2025-12-06 10:16:03.950706462 +0000 UTC m=+0.070035872 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.openshift.expose-services=, release=1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2)
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.951 254824 DEBUG nova.virt.libvirt.vif [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:15:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823850228',display_name='tempest-TestNetworkBasicOps-server-1823850228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823850228',id=13,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZ1JbYqKoCUxIiM8hDMgdZSsRHQUcoBjRF2DOasdBtdUJsR/+RRaag7cOntBUu6Pnxm7ZLVxvld0ACRX3Mi2/RpeAQ5OWV7PuIX+IEnS95lS5yg27/v0AunJEPN78t9BQ==',key_name='tempest-TestNetworkBasicOps-539349224',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-tzfybp9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:15:37Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=7ebb0f0e-b16a-451f-b85a-623f5bcf704f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.952 254824 DEBUG nova.network.os_vif_util [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.953 254824 DEBUG nova.network.os_vif_util [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.953 254824 DEBUG os_vif [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 05:16:03 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7bbfc0c4f48e5f7f64a74644fc0e70facc48d03f5e9acb49b25622ca059f294b-merged.mount: Deactivated successfully.
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.955 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.955 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0f2c61-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.956 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.958 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:03 np0005548915 nova_compute[254819]: 2025-12-06 10:16:03.960 254824 INFO os_vif [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d')#033[00m
Dec  6 05:16:03 np0005548915 podman[278104]: 2025-12-06 10:16:03.963394424 +0000 UTC m=+0.094448932 container cleanup 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:16:03 np0005548915 podman[278115]: 2025-12-06 10:16:03.964043131 +0000 UTC m=+0.083372511 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, release=1793, io.buildah.version=1.28.2, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Dec  6 05:16:03 np0005548915 systemd[1]: libpod-conmon-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92.scope: Deactivated successfully.
Dec  6 05:16:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:04.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG nova.compute.manager [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-unplugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG oslo_concurrency.lockutils [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG oslo_concurrency.lockutils [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG oslo_concurrency.lockutils [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.054 254824 DEBUG nova.compute.manager [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] No waiting events found dispatching network-vif-unplugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.054 254824 DEBUG nova.compute.manager [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-unplugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 05:16:04 np0005548915 podman[278188]: 2025-12-06 10:16:04.063434655 +0000 UTC m=+0.060795603 container remove 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.070 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1f3163-2ab9-4d84-a5a0-b98c4444a711]: (4, ('Sat Dec  6 10:16:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c (54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92)\n54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92\nSat Dec  6 10:16:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c (54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92)\n54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.072 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0fb779-6a5d-48be-95fd-a4a2244bafcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.073 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19a420c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.075 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:04 np0005548915 kernel: tapf19a420c-d0: left promiscuous mode
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.096 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.100 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[27757000-a321-4b3f-8fbf-f625bce64864]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.117 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4cbad1-8d28-40e4-9f99-5ad15e715470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.119 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5fdabb-a24c-4599-8c73-c0ba90f86519]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.141 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9a154c-158f-4938-b050-7e1a5e8db29e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450603, 'reachable_time': 34911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278243, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.144 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 05:16:04 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.144 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2f0259-a801-4609-a1ca-164ac4ba1076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 05:16:04 np0005548915 systemd[1]: run-netns-ovnmeta\x2df19a420c\x2dd088\x2d44ba\x2d92a5\x2dba4d8025ce6c.mount: Deactivated successfully.
Dec  6 05:16:04 np0005548915 podman[278246]: 2025-12-06 10:16:04.211301368 +0000 UTC m=+0.050525325 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:16:04 np0005548915 podman[278246]: 2025-12-06 10:16:04.241819442 +0000 UTC m=+0.081043359 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.363 254824 INFO nova.virt.libvirt.driver [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deleting instance files /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_del#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.363 254824 INFO nova.virt.libvirt.driver [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deletion of /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_del complete#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.416 254824 INFO nova.compute.manager [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.417 254824 DEBUG oslo.service.loopingcall [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.417 254824 DEBUG nova.compute.manager [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.417 254824 DEBUG nova.network.neutron [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 05:16:04 np0005548915 podman[278321]: 2025-12-06 10:16:04.493857036 +0000 UTC m=+0.073068204 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 05:16:04 np0005548915 podman[278321]: 2025-12-06 10:16:04.690963169 +0000 UTC m=+0.270174317 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 05:16:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.918 254824 DEBUG nova.network.neutron [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated VIF entry in instance network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.918 254824 DEBUG nova.network.neutron [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:16:04 np0005548915 nova_compute[254819]: 2025-12-06 10:16:04.939 254824 DEBUG oslo_concurrency.lockutils [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 05:16:05 np0005548915 podman[278432]: 2025-12-06 10:16:05.076810887 +0000 UTC m=+0.049854557 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:16:05 np0005548915 podman[278432]: 2025-12-06 10:16:05.108592195 +0000 UTC m=+0.081635875 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:05 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:05 np0005548915 nova_compute[254819]: 2025-12-06 10:16:05.464 254824 DEBUG nova.network.neutron [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 05:16:05 np0005548915 nova_compute[254819]: 2025-12-06 10:16:05.487 254824 INFO nova.compute.manager [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 1.07 seconds to deallocate network for instance.#033[00m
Dec  6 05:16:05 np0005548915 nova_compute[254819]: 2025-12-06 10:16:05.551 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:05 np0005548915 nova_compute[254819]: 2025-12-06 10:16:05.552 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:05 np0005548915 nova_compute[254819]: 2025-12-06 10:16:05.624 254824 DEBUG oslo_concurrency.processutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:16:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 955 B/s rd, 15 KiB/s wr, 1 op/s
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:16:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:05.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:16:05 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:16:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500206167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.141 254824 DEBUG oslo_concurrency.processutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.149 254824 DEBUG nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.149 254824 DEBUG oslo_concurrency.lockutils [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.150 254824 DEBUG oslo_concurrency.lockutils [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.150 254824 DEBUG oslo_concurrency.lockutils [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.150 254824 DEBUG nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] No waiting events found dispatching network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.151 254824 WARNING nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received unexpected event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for instance with vm_state deleted and task_state None.#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.151 254824 DEBUG nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-deleted-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.155 254824 DEBUG nova.compute.provider_tree [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.170 254824 DEBUG nova.scheduler.client.report [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.192 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec  6 05:16:06 np0005548915 ceph-mon[74327]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.216 254824 INFO nova.scheduler.client.report [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 7ebb0f0e-b16a-451f-b85a-623f5bcf704f#033[00m
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.287 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:06 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:06.403 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:16:06 np0005548915 podman[278667]: 2025-12-06 10:16:06.469562852 +0000 UTC m=+0.048152711 container create e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 05:16:06 np0005548915 systemd[1]: Started libpod-conmon-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope.
Dec  6 05:16:06 np0005548915 podman[278667]: 2025-12-06 10:16:06.44801763 +0000 UTC m=+0.026607499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:16:06 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:16:06 np0005548915 podman[278667]: 2025-12-06 10:16:06.579948782 +0000 UTC m=+0.158538621 container init e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 05:16:06 np0005548915 podman[278667]: 2025-12-06 10:16:06.586462378 +0000 UTC m=+0.165052197 container start e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:16:06 np0005548915 podman[278667]: 2025-12-06 10:16:06.590823596 +0000 UTC m=+0.169413425 container attach e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:16:06 np0005548915 dazzling_goldberg[278683]: 167 167
Dec  6 05:16:06 np0005548915 systemd[1]: libpod-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope: Deactivated successfully.
Dec  6 05:16:06 np0005548915 conmon[278683]: conmon e0c73b0217ba6354e8c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope/container/memory.events
Dec  6 05:16:06 np0005548915 podman[278667]: 2025-12-06 10:16:06.592785809 +0000 UTC m=+0.171375638 container died e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 05:16:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-131f442556f504e84c52a494bb34c8569a6eba2b00f95abf9030fc9221635666-merged.mount: Deactivated successfully.
Dec  6 05:16:06 np0005548915 podman[278667]: 2025-12-06 10:16:06.626794247 +0000 UTC m=+0.205384076 container remove e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 05:16:06 np0005548915 systemd[1]: libpod-conmon-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope: Deactivated successfully.
Dec  6 05:16:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:06 np0005548915 nova_compute[254819]: 2025-12-06 10:16:06.815 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:06 np0005548915 podman[278707]: 2025-12-06 10:16:06.843648572 +0000 UTC m=+0.064735429 container create a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  6 05:16:06 np0005548915 systemd[1]: Started libpod-conmon-a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6.scope.
Dec  6 05:16:06 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:06 np0005548915 podman[278707]: 2025-12-06 10:16:06.823026696 +0000 UTC m=+0.044113603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:16:06 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:16:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:06 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:06 np0005548915 podman[278707]: 2025-12-06 10:16:06.950187999 +0000 UTC m=+0.171274876 container init a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 05:16:06 np0005548915 podman[278707]: 2025-12-06 10:16:06.958279227 +0000 UTC m=+0.179366084 container start a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:16:06 np0005548915 podman[278707]: 2025-12-06 10:16:06.961314669 +0000 UTC m=+0.182401526 container attach a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:16:07 np0005548915 ceph-mon[74327]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec  6 05:16:07 np0005548915 ceph-mon[74327]: Cluster is now healthy
Dec  6 05:16:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:07 np0005548915 sharp_thompson[278723]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:16:07 np0005548915 sharp_thompson[278723]: --> All data devices are unavailable
Dec  6 05:16:07 np0005548915 systemd[1]: libpod-a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6.scope: Deactivated successfully.
Dec  6 05:16:07 np0005548915 podman[278707]: 2025-12-06 10:16:07.378193775 +0000 UTC m=+0.599280672 container died a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:16:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745-merged.mount: Deactivated successfully.
Dec  6 05:16:07 np0005548915 podman[278707]: 2025-12-06 10:16:07.446382216 +0000 UTC m=+0.667469113 container remove a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:16:07 np0005548915 systemd[1]: libpod-conmon-a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6.scope: Deactivated successfully.
Dec  6 05:16:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:07.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:16:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:07.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:16:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:07.671Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 29 op/s
Dec  6 05:16:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:07.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:08 np0005548915 podman[278843]: 2025-12-06 10:16:08.123805977 +0000 UTC m=+0.037595586 container create f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:16:08 np0005548915 systemd[1]: Started libpod-conmon-f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f.scope.
Dec  6 05:16:08 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:16:08 np0005548915 podman[278843]: 2025-12-06 10:16:08.19390516 +0000 UTC m=+0.107694789 container init f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 05:16:08 np0005548915 podman[278843]: 2025-12-06 10:16:08.199400298 +0000 UTC m=+0.113189947 container start f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:16:08 np0005548915 podman[278843]: 2025-12-06 10:16:08.107359183 +0000 UTC m=+0.021148812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:16:08 np0005548915 cranky_ritchie[278859]: 167 167
Dec  6 05:16:08 np0005548915 systemd[1]: libpod-f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f.scope: Deactivated successfully.
Dec  6 05:16:08 np0005548915 podman[278843]: 2025-12-06 10:16:08.203919661 +0000 UTC m=+0.117709300 container attach f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:16:08 np0005548915 podman[278843]: 2025-12-06 10:16:08.204944428 +0000 UTC m=+0.118734067 container died f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:16:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay-cb4bab2923bf159cbe9f772573096ccdc599d3b88aa2cd8d2aca0544bf937eb2-merged.mount: Deactivated successfully.
Dec  6 05:16:08 np0005548915 podman[278843]: 2025-12-06 10:16:08.253078257 +0000 UTC m=+0.166867896 container remove f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:16:08 np0005548915 systemd[1]: libpod-conmon-f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f.scope: Deactivated successfully.
Dec  6 05:16:08 np0005548915 podman[278885]: 2025-12-06 10:16:08.443940682 +0000 UTC m=+0.049253182 container create 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 05:16:08 np0005548915 systemd[1]: Started libpod-conmon-363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c.scope.
Dec  6 05:16:08 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:16:08 np0005548915 podman[278885]: 2025-12-06 10:16:08.421162286 +0000 UTC m=+0.026474786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:16:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:08 np0005548915 podman[278885]: 2025-12-06 10:16:08.541156186 +0000 UTC m=+0.146468736 container init 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:16:08 np0005548915 podman[278885]: 2025-12-06 10:16:08.554681091 +0000 UTC m=+0.159993561 container start 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  6 05:16:08 np0005548915 podman[278885]: 2025-12-06 10:16:08.558683159 +0000 UTC m=+0.163995649 container attach 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 05:16:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]: {
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:    "1": [
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:        {
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "devices": [
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "/dev/loop3"
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            ],
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "lv_name": "ceph_lv0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "lv_size": "21470642176",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "name": "ceph_lv0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "tags": {
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.cluster_name": "ceph",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.crush_device_class": "",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.encrypted": "0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.osd_id": "1",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.type": "block",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.vdo": "0",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:                "ceph.with_tpm": "0"
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            },
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "type": "block",
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:            "vg_name": "ceph_vg0"
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:        }
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]:    ]
Dec  6 05:16:08 np0005548915 condescending_ritchie[278901]: }
Dec  6 05:16:08 np0005548915 systemd[1]: libpod-363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c.scope: Deactivated successfully.
Dec  6 05:16:08 np0005548915 podman[278885]: 2025-12-06 10:16:08.859693127 +0000 UTC m=+0.465005587 container died 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 05:16:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay-26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0-merged.mount: Deactivated successfully.
Dec  6 05:16:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a2a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:08 np0005548915 podman[278885]: 2025-12-06 10:16:08.908454573 +0000 UTC m=+0.513767033 container remove 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 05:16:08 np0005548915 systemd[1]: libpod-conmon-363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c.scope: Deactivated successfully.
Dec  6 05:16:08 np0005548915 nova_compute[254819]: 2025-12-06 10:16:08.959 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  6 05:16:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:16:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:16:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:09.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:09 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:09 np0005548915 podman[279015]: 2025-12-06 10:16:09.605956956 +0000 UTC m=+0.055983993 container create bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  6 05:16:09 np0005548915 systemd[1]: Started libpod-conmon-bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d.scope.
Dec  6 05:16:09 np0005548915 podman[279015]: 2025-12-06 10:16:09.581837385 +0000 UTC m=+0.031864462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:16:09 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:16:09 np0005548915 podman[279015]: 2025-12-06 10:16:09.713615413 +0000 UTC m=+0.163642460 container init bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 05:16:09 np0005548915 podman[279015]: 2025-12-06 10:16:09.722045891 +0000 UTC m=+0.172072918 container start bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:16:09 np0005548915 podman[279015]: 2025-12-06 10:16:09.726274295 +0000 UTC m=+0.176301362 container attach bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:16:09 np0005548915 laughing_clarke[279032]: 167 167
Dec  6 05:16:09 np0005548915 systemd[1]: libpod-bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d.scope: Deactivated successfully.
Dec  6 05:16:09 np0005548915 podman[279015]: 2025-12-06 10:16:09.727838997 +0000 UTC m=+0.177866024 container died bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:16:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1845a5fc4d02dca4b6ffe195de996eef876f722785f7f6b7e63fb7b7958bf7ab-merged.mount: Deactivated successfully.
Dec  6 05:16:09 np0005548915 podman[279015]: 2025-12-06 10:16:09.76385594 +0000 UTC m=+0.213882967 container remove bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:16:09 np0005548915 systemd[1]: libpod-conmon-bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d.scope: Deactivated successfully.
Dec  6 05:16:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec  6 05:16:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:09.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:09 np0005548915 podman[279056]: 2025-12-06 10:16:09.97311772 +0000 UTC m=+0.065756076 container create 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:16:09 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:10 np0005548915 systemd[1]: Started libpod-conmon-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope.
Dec  6 05:16:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:10 np0005548915 podman[279056]: 2025-12-06 10:16:09.946228844 +0000 UTC m=+0.038867240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:16:10 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:16:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:10 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:16:10 np0005548915 podman[279056]: 2025-12-06 10:16:10.082566215 +0000 UTC m=+0.175204581 container init 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Dec  6 05:16:10 np0005548915 podman[279056]: 2025-12-06 10:16:10.094448886 +0000 UTC m=+0.187087242 container start 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  6 05:16:10 np0005548915 podman[279056]: 2025-12-06 10:16:10.106586424 +0000 UTC m=+0.199224790 container attach 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 05:16:10 np0005548915 lvm[279146]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:16:10 np0005548915 lvm[279146]: VG ceph_vg0 finished
Dec  6 05:16:10 np0005548915 lvm[279150]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:16:10 np0005548915 lvm[279150]: VG ceph_vg0 finished
Dec  6 05:16:10 np0005548915 competent_johnson[279072]: {}
Dec  6 05:16:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:10 np0005548915 systemd[1]: libpod-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope: Deactivated successfully.
Dec  6 05:16:10 np0005548915 systemd[1]: libpod-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope: Consumed 1.083s CPU time.
Dec  6 05:16:10 np0005548915 podman[279056]: 2025-12-06 10:16:10.813337716 +0000 UTC m=+0.905976062 container died 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 05:16:10 np0005548915 systemd[1]: var-lib-containers-storage-overlay-2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3-merged.mount: Deactivated successfully.
Dec  6 05:16:10 np0005548915 podman[279056]: 2025-12-06 10:16:10.851429945 +0000 UTC m=+0.944068291 container remove 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:16:10 np0005548915 systemd[1]: libpod-conmon-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope: Deactivated successfully.
Dec  6 05:16:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:16:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:16:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:16:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:16:10 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:11 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:11 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:16:11 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:11 np0005548915 nova_compute[254819]: 2025-12-06 10:16:11.816 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec  6 05:16:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:11.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:12 np0005548915 nova_compute[254819]: 2025-12-06 10:16:12.006 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:12 np0005548915 nova_compute[254819]: 2025-12-06 10:16:12.113 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:12 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:13 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec  6 05:16:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:13.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:13 np0005548915 nova_compute[254819]: 2025-12-06 10:16:13.963 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:14 np0005548915 podman[279192]: 2025-12-06 10:16:14.482944178 +0000 UTC m=+0.104545633 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:16:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001a00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:14 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:15 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  6 05:16:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:15.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:16 np0005548915 nova_compute[254819]: 2025-12-06 10:16:16.819 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:16 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101617 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:16:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:17.672Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  6 05:16:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:18.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40045d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:18 np0005548915 nova_compute[254819]: 2025-12-06 10:16:18.923 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016163.9219046, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 05:16:18 np0005548915 nova_compute[254819]: 2025-12-06 10:16:18.923 254824 INFO nova.compute.manager [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Stopped (Lifecycle Event)#033[00m
Dec  6 05:16:18 np0005548915 nova_compute[254819]: 2025-12-06 10:16:18.952 254824 DEBUG nova.compute.manager [None req-375c1389-ab3f-4408-b044-77d18aba20c6 - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 05:16:18 np0005548915 nova_compute[254819]: 2025-12-06 10:16:18.966 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:19.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:19 np0005548915 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:51930] [POST] [200] [0.003s] [4.0B] [acb5433b-d0b7-408c-abb1-d799d504c557] /api/prometheus_receiver
Dec  6 05:16:19 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:19 np0005548915 podman[279241]: 2025-12-06 10:16:19.517881156 +0000 UTC m=+0.134666928 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:16:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:16:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:20.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:16:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:16:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:21 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40045f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:21 np0005548915 nova_compute[254819]: 2025-12-06 10:16:21.821 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:16:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:16:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:21.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:16:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:22.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:22 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:23 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  6 05:16:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:23.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:16:23
Dec  6 05:16:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:16:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:16:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', '.rgw.root', '.nfs', 'images', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Dec  6 05:16:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:16:23 np0005548915 nova_compute[254819]: 2025-12-06 10:16:23.970 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:16:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:16:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:24.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:16:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:16:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004610 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:24 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:25 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:25 np0005548915 podman[279273]: 2025-12-06 10:16:25.479022399 +0000 UTC m=+0.101349947 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:16:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:26.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:26 np0005548915 nova_compute[254819]: 2025-12-06 10:16:26.882 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:26 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:27.673Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:16:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:27.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:28.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:28 np0005548915 nova_compute[254819]: 2025-12-06 10:16:28.973 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:29 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:16:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:29.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:16:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:16:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:16:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:31 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:31 np0005548915 nova_compute[254819]: 2025-12-06 10:16:31.885 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:16:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:31.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:16:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:32.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:32 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:33 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:16:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:33.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:33 np0005548915 nova_compute[254819]: 2025-12-06 10:16:33.976 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:34.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:34 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  6 05:16:35 np0005548915 kernel: ganesha.nfsd[277147]: segfault at 50 ip 00007f66bab6c32e sp 00007f66727fb210 error 4 in libntirpc.so.5.8[7f66bab51000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  6 05:16:35 np0005548915 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  6 05:16:35 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy ignored for local
Dec  6 05:16:35 np0005548915 systemd[1]: Started Process Core Dump (PID 279328/UID 0).
Dec  6 05:16:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:35.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:36.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:36 np0005548915 systemd-coredump[279329]: Process 267051 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 81:#012#0  0x00007f66bab6c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  6 05:16:36 np0005548915 systemd[1]: systemd-coredump@11-279328-0.service: Deactivated successfully.
Dec  6 05:16:36 np0005548915 systemd[1]: systemd-coredump@11-279328-0.service: Consumed 1.148s CPU time.
Dec  6 05:16:36 np0005548915 podman[279336]: 2025-12-06 10:16:36.696240423 +0000 UTC m=+0.022492868 container died c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 05:16:36 np0005548915 systemd[1]: var-lib-containers-storage-overlay-1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8-merged.mount: Deactivated successfully.
Dec  6 05:16:36 np0005548915 podman[279336]: 2025-12-06 10:16:36.735427611 +0000 UTC m=+0.061680036 container remove c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 05:16:36 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec  6 05:16:36 np0005548915 nova_compute[254819]: 2025-12-06 10:16:36.887 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:36 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 05:16:36 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.397s CPU time.
Dec  6 05:16:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:37.674Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  6 05:16:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:37.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:38.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:16:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:16:38 np0005548915 nova_compute[254819]: 2025-12-06 10:16:38.980 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:39.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:40.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:16:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:16:41 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101641 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:16:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:41 np0005548915 nova_compute[254819]: 2025-12-06 10:16:41.935 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:41.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:42 np0005548915 ovn_controller[152417]: 2025-12-06T10:16:42Z|00142|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  6 05:16:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:43 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:43 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:43 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:43.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:43 np0005548915 nova_compute[254819]: 2025-12-06 10:16:43.984 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:45 np0005548915 podman[279389]: 2025-12-06 10:16:45.656963597 +0000 UTC m=+0.273866805 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  6 05:16:45 np0005548915 nova_compute[254819]: 2025-12-06 10:16:45.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:16:45 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:45 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  6 05:16:45 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:45.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  6 05:16:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:16:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526346481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:16:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:16:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526346481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:16:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:46 np0005548915 nova_compute[254819]: 2025-12-06 10:16:46.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:46 np0005548915 nova_compute[254819]: 2025-12-06 10:16:46.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:46 np0005548915 nova_compute[254819]: 2025-12-06 10:16:46.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:46 np0005548915 nova_compute[254819]: 2025-12-06 10:16:46.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:46 np0005548915 nova_compute[254819]: 2025-12-06 10:16:46.782 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:16:46 np0005548915 nova_compute[254819]: 2025-12-06 10:16:46.783 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:16:46 np0005548915 nova_compute[254819]: 2025-12-06 10:16:46.976 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:47 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 12.
Dec  6 05:16:47 np0005548915 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:16:47 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.397s CPU time.
Dec  6 05:16:47 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Start request repeated too quickly.
Dec  6 05:16:47 np0005548915 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec  6 05:16:47 np0005548915 systemd[1]: Failed to start Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec  6 05:16:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:16:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098041809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.339 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.538 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.539 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4516MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.540 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.540 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.606 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.606 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:16:47 np0005548915 nova_compute[254819]: 2025-12-06 10:16:47.621 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:16:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:47.675Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:47 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:47 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:47 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:47.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:16:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494624022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:16:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:48 np0005548915 nova_compute[254819]: 2025-12-06 10:16:48.092 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:16:48 np0005548915 nova_compute[254819]: 2025-12-06 10:16:48.099 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:16:48 np0005548915 nova_compute[254819]: 2025-12-06 10:16:48.114 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:16:48 np0005548915 nova_compute[254819]: 2025-12-06 10:16:48.139 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:16:48 np0005548915 nova_compute[254819]: 2025-12-06 10:16:48.139 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:48 np0005548915 nova_compute[254819]: 2025-12-06 10:16:48.987 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:16:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:49 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:49 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:49 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:50 np0005548915 podman[279461]: 2025-12-06 10:16:50.51070904 +0000 UTC m=+0.128521831 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec  6 05:16:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:16:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec  6 05:16:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:16:51 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:51 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:51 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:51.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:51 np0005548915 nova_compute[254819]: 2025-12-06 10:16:51.977 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:16:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.132 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.162 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.162 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.162 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.182 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.182 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.182 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.183 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:52 np0005548915 nova_compute[254819]: 2025-12-06 10:16:52.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:16:53 np0005548915 nova_compute[254819]: 2025-12-06 10:16:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:16:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:16:53 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:53 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:53 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:53.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:16:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:16:53 np0005548915 nova_compute[254819]: 2025-12-06 10:16:53.990 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:16:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:16:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:16:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:16:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:16:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:16:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:16:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:16:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:16:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:16:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:16:55 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:55 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:55 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:56 np0005548915 podman[279519]: 2025-12-06 10:16:56.45496373 +0000 UTC m=+0.077372341 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:16:56 np0005548915 nova_compute[254819]: 2025-12-06 10:16:56.979 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:57.676Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:16:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  6 05:16:57 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:57 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:57 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:57.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:16:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:16:58 np0005548915 nova_compute[254819]: 2025-12-06 10:16:58.994 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:16:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:16:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:16:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:16:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:16:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:00.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:17:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:17:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:17:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:01 np0005548915 nova_compute[254819]: 2025-12-06 10:17:01.984 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:17:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:17:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:17:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:03 np0005548915 nova_compute[254819]: 2025-12-06 10:17:03.997 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:04.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101704 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  6 05:17:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [NOTICE] 339/101704 (4) : haproxy version is 2.3.17-d1c9119
Dec  6 05:17:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [NOTICE] 339/101704 (4) : path to executable is /usr/local/sbin/haproxy
Dec  6 05:17:04 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [ALERT] 339/101704 (4) : backend 'backend' has no server available!
Dec  6 05:17:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:17:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:05.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:06.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:06 np0005548915 nova_compute[254819]: 2025-12-06 10:17:06.985 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:07.677Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:17:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  6 05:17:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:07.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:08.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:17:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:17:09 np0005548915 nova_compute[254819]: 2025-12-06 10:17:09.001 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Dec  6 05:17:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:10.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:17:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:17:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Dec  6 05:17:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:11.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:11 np0005548915 nova_compute[254819]: 2025-12-06 10:17:11.988 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:17:12 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:17:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:12.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:12 np0005548915 podman[279725]: 2025-12-06 10:17:12.742918744 +0000 UTC m=+0.113861816 container create f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:17:12 np0005548915 podman[279725]: 2025-12-06 10:17:12.672171944 +0000 UTC m=+0.043115096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:17:12 np0005548915 systemd[1]: Started libpod-conmon-f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08.scope.
Dec  6 05:17:12 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:17:12 np0005548915 podman[279725]: 2025-12-06 10:17:12.854900578 +0000 UTC m=+0.225843730 container init f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  6 05:17:12 np0005548915 podman[279725]: 2025-12-06 10:17:12.865162145 +0000 UTC m=+0.236105247 container start f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  6 05:17:12 np0005548915 podman[279725]: 2025-12-06 10:17:12.870653844 +0000 UTC m=+0.241597016 container attach f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 05:17:12 np0005548915 adoring_driscoll[279743]: 167 167
Dec  6 05:17:12 np0005548915 podman[279725]: 2025-12-06 10:17:12.874155598 +0000 UTC m=+0.245098680 container died f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:17:12 np0005548915 systemd-logind[795]: New session 56 of user zuul.
Dec  6 05:17:12 np0005548915 systemd[1]: libpod-f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08.scope: Deactivated successfully.
Dec  6 05:17:12 np0005548915 systemd[1]: Started Session 56 of User zuul.
Dec  6 05:17:12 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9c55d6728a188dbcd5ea44547e53b3f3c67d8e275c197f3f5ac1c85367e52ab9-merged.mount: Deactivated successfully.
Dec  6 05:17:12 np0005548915 podman[279725]: 2025-12-06 10:17:12.928172036 +0000 UTC m=+0.299115108 container remove f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:17:12 np0005548915 systemd[1]: libpod-conmon-f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08.scope: Deactivated successfully.
Dec  6 05:17:13 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:13 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:13 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:17:13 np0005548915 podman[279796]: 2025-12-06 10:17:13.100865949 +0000 UTC m=+0.050173926 container create 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:17:13 np0005548915 systemd[1]: Started libpod-conmon-3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711.scope.
Dec  6 05:17:13 np0005548915 podman[279796]: 2025-12-06 10:17:13.0808941 +0000 UTC m=+0.030202057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:17:13 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:17:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:13 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:13 np0005548915 podman[279796]: 2025-12-06 10:17:13.220700695 +0000 UTC m=+0.170008652 container init 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 05:17:13 np0005548915 podman[279796]: 2025-12-06 10:17:13.237049206 +0000 UTC m=+0.186357143 container start 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:17:13 np0005548915 podman[279796]: 2025-12-06 10:17:13.24012285 +0000 UTC m=+0.189430787 container attach 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  6 05:17:13 np0005548915 nifty_lamport[279814]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:17:13 np0005548915 nifty_lamport[279814]: --> All data devices are unavailable
Dec  6 05:17:13 np0005548915 podman[279796]: 2025-12-06 10:17:13.620712625 +0000 UTC m=+0.570020562 container died 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:17:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:17:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:13.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:14 np0005548915 systemd[1]: libpod-3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711.scope: Deactivated successfully.
Dec  6 05:17:14 np0005548915 nova_compute[254819]: 2025-12-06 10:17:14.046 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:14 np0005548915 systemd[1]: var-lib-containers-storage-overlay-db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574-merged.mount: Deactivated successfully.
Dec  6 05:17:14 np0005548915 podman[279796]: 2025-12-06 10:17:14.084851847 +0000 UTC m=+1.034159774 container remove 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 05:17:14 np0005548915 systemd[1]: libpod-conmon-3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711.scope: Deactivated successfully.
Dec  6 05:17:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:14.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:14 np0005548915 podman[279995]: 2025-12-06 10:17:14.675532596 +0000 UTC m=+0.045796607 container create 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:17:14 np0005548915 systemd[1]: Started libpod-conmon-78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9.scope.
Dec  6 05:17:14 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:17:14 np0005548915 podman[279995]: 2025-12-06 10:17:14.65865806 +0000 UTC m=+0.028922071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:17:14 np0005548915 podman[279995]: 2025-12-06 10:17:14.762324369 +0000 UTC m=+0.132588400 container init 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 05:17:14 np0005548915 podman[279995]: 2025-12-06 10:17:14.768893716 +0000 UTC m=+0.139157727 container start 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 05:17:14 np0005548915 podman[279995]: 2025-12-06 10:17:14.772619838 +0000 UTC m=+0.142883849 container attach 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:17:14 np0005548915 crazy_bhaskara[280045]: 167 167
Dec  6 05:17:14 np0005548915 systemd[1]: libpod-78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9.scope: Deactivated successfully.
Dec  6 05:17:14 np0005548915 podman[279995]: 2025-12-06 10:17:14.773907462 +0000 UTC m=+0.144171473 container died 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:17:14 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a6493d543af632e5cf14a873aeb0887ce30f41e86e90be03efadd39b544a044a-merged.mount: Deactivated successfully.
Dec  6 05:17:14 np0005548915 podman[279995]: 2025-12-06 10:17:14.810948162 +0000 UTC m=+0.181212183 container remove 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:17:14 np0005548915 systemd[1]: libpod-conmon-78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9.scope: Deactivated successfully.
Dec  6 05:17:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:15 np0005548915 podman[280081]: 2025-12-06 10:17:15.008515527 +0000 UTC m=+0.059285282 container create 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 05:17:15 np0005548915 podman[280081]: 2025-12-06 10:17:14.971537739 +0000 UTC m=+0.022307484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:17:15 np0005548915 systemd[1]: Started libpod-conmon-1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153.scope.
Dec  6 05:17:15 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:17:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:15 np0005548915 podman[280081]: 2025-12-06 10:17:15.114599761 +0000 UTC m=+0.165369556 container init 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:17:15 np0005548915 podman[280081]: 2025-12-06 10:17:15.121575029 +0000 UTC m=+0.172344774 container start 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:17:15 np0005548915 podman[280081]: 2025-12-06 10:17:15.128197858 +0000 UTC m=+0.178967594 container attach 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 05:17:15 np0005548915 magical_shaw[280111]: {
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:    "1": [
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:        {
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "devices": [
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "/dev/loop3"
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            ],
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "lv_name": "ceph_lv0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "lv_size": "21470642176",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "name": "ceph_lv0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "tags": {
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.cluster_name": "ceph",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.crush_device_class": "",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.encrypted": "0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.osd_id": "1",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.type": "block",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.vdo": "0",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:                "ceph.with_tpm": "0"
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            },
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "type": "block",
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:            "vg_name": "ceph_vg0"
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:        }
Dec  6 05:17:15 np0005548915 magical_shaw[280111]:    ]
Dec  6 05:17:15 np0005548915 magical_shaw[280111]: }
Dec  6 05:17:15 np0005548915 systemd[1]: libpod-1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153.scope: Deactivated successfully.
Dec  6 05:17:15 np0005548915 podman[280081]: 2025-12-06 10:17:15.428799875 +0000 UTC m=+0.479569600 container died 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:17:15 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5-merged.mount: Deactivated successfully.
Dec  6 05:17:15 np0005548915 podman[280081]: 2025-12-06 10:17:15.46714373 +0000 UTC m=+0.517913445 container remove 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:17:15 np0005548915 systemd[1]: libpod-conmon-1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153.scope: Deactivated successfully.
Dec  6 05:17:15 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25475 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:15 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17010 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:15 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17016 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  6 05:17:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:16 np0005548915 podman[280275]: 2025-12-06 10:17:16.008775194 +0000 UTC m=+0.052379015 container create 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:17:16 np0005548915 systemd[1]: Started libpod-conmon-95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc.scope.
Dec  6 05:17:16 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:17:16 np0005548915 podman[280275]: 2025-12-06 10:17:15.99156127 +0000 UTC m=+0.035165121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:17:16 np0005548915 podman[280275]: 2025-12-06 10:17:16.099181875 +0000 UTC m=+0.142785746 container init 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 05:17:16 np0005548915 podman[280275]: 2025-12-06 10:17:16.10639291 +0000 UTC m=+0.149996751 container start 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:17:16 np0005548915 podman[280275]: 2025-12-06 10:17:16.110733157 +0000 UTC m=+0.154336988 container attach 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:17:16 np0005548915 elated_poincare[280311]: 167 167
Dec  6 05:17:16 np0005548915 systemd[1]: libpod-95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc.scope: Deactivated successfully.
Dec  6 05:17:16 np0005548915 podman[280275]: 2025-12-06 10:17:16.112883155 +0000 UTC m=+0.156486986 container died 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 05:17:16 np0005548915 podman[280308]: 2025-12-06 10:17:16.119829553 +0000 UTC m=+0.070704821 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 05:17:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:16.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:16 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e1221786c2bf1df9f41a96794a794db18d5744c68accfdc65422246a993177e6-merged.mount: Deactivated successfully.
Dec  6 05:17:16 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25484 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:16 np0005548915 podman[280275]: 2025-12-06 10:17:16.1560451 +0000 UTC m=+0.199648931 container remove 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 05:17:16 np0005548915 systemd[1]: libpod-conmon-95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc.scope: Deactivated successfully.
Dec  6 05:17:16 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26341 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:16 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17028 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:16 np0005548915 podman[280357]: 2025-12-06 10:17:16.367713206 +0000 UTC m=+0.047571186 container create 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:17:16 np0005548915 systemd[1]: Started libpod-conmon-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope.
Dec  6 05:17:16 np0005548915 podman[280357]: 2025-12-06 10:17:16.349970797 +0000 UTC m=+0.029828827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:17:16 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:17:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:16 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:17:16 np0005548915 podman[280357]: 2025-12-06 10:17:16.478720623 +0000 UTC m=+0.158578633 container init 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 05:17:16 np0005548915 podman[280357]: 2025-12-06 10:17:16.48528795 +0000 UTC m=+0.165145940 container start 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 05:17:16 np0005548915 podman[280357]: 2025-12-06 10:17:16.505588808 +0000 UTC m=+0.185446818 container attach 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  6 05:17:16 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  6 05:17:16 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306672627' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  6 05:17:16 np0005548915 nova_compute[254819]: 2025-12-06 10:17:16.991 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:17 np0005548915 lvm[280491]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:17:17 np0005548915 lvm[280491]: VG ceph_vg0 finished
Dec  6 05:17:17 np0005548915 gracious_mayer[280392]: {}
Dec  6 05:17:17 np0005548915 systemd[1]: libpod-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope: Deactivated successfully.
Dec  6 05:17:17 np0005548915 podman[280357]: 2025-12-06 10:17:17.178691933 +0000 UTC m=+0.858549923 container died 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:17:17 np0005548915 systemd[1]: libpod-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope: Consumed 1.146s CPU time.
Dec  6 05:17:17 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17-merged.mount: Deactivated successfully.
Dec  6 05:17:17 np0005548915 podman[280357]: 2025-12-06 10:17:17.407455079 +0000 UTC m=+1.087313069 container remove 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:17:17 np0005548915 systemd[1]: libpod-conmon-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope: Deactivated successfully.
Dec  6 05:17:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:17:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:17 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:17:17 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:17.678Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:17:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec  6 05:17:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:17:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:17:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:18 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:17:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:18.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:19 np0005548915 nova_compute[254819]: 2025-12-06 10:17:19.051 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:19 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec  6 05:17:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:20.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:17:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:17:21 np0005548915 podman[280625]: 2025-12-06 10:17:21.216622678 +0000 UTC m=+0.106052835 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:17:21 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec  6 05:17:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:21.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:21 np0005548915 nova_compute[254819]: 2025-12-06 10:17:21.993 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:22 np0005548915 ovs-vsctl[280679]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  6 05:17:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:17:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:22.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:17:23 np0005548915 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  6 05:17:23 np0005548915 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  6 05:17:23 np0005548915 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  6 05:17:23 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: cache status {prefix=cache status} (starting...)
Dec  6 05:17:23 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:23 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: client ls {prefix=client ls} (starting...)
Dec  6 05:17:23 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:23 np0005548915 lvm[281019]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:17:23 np0005548915 lvm[281019]: VG ceph_vg0 finished
Dec  6 05:17:23 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec  6 05:17:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:17:23
Dec  6 05:17:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:17:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:17:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', 'backups', 'cephfs.cephfs.data']
Dec  6 05:17:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:17:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:17:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:17:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:23.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:17:24 np0005548915 nova_compute[254819]: 2025-12-06 10:17:24.054 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:24.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26362 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17049 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: damage ls {prefix=damage ls} (starting...)
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26380 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump loads {prefix=dump loads} (starting...)
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  6 05:17:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  6 05:17:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295855214' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17073 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25523 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  6 05:17:24 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/439237568' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26395 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  6 05:17:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:17:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092890167' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17091 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25532 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26410 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec  6 05:17:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707223938' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  6 05:17:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17106 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25556 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: ops {prefix=ops} (starting...)
Dec  6 05:17:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:25 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Dec  6 05:17:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:25.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  6 05:17:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289716866' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  6 05:17:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:26.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  6 05:17:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469772445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  6 05:17:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: session ls {prefix=session ls} (starting...)
Dec  6 05:17:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:17:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17148 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  6 05:17:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669643766' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  6 05:17:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: status {prefix=status} (starting...)
Dec  6 05:17:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26467 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17172 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25592 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:26 np0005548915 nova_compute[254819]: 2025-12-06 10:17:26.995 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187974181' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/957817502' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 05:17:27 np0005548915 podman[281530]: 2025-12-06 10:17:27.443447846 +0000 UTC m=+0.064488602 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  6 05:17:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2900583648' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  6 05:17:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:27.679Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:17:27 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Dec  6 05:17:27 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26524 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:17:27.929+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:17:27 np0005548915 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:17:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:27.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:28.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:28 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17232 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:28 np0005548915 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:17:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:17:28.181+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:17:28 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:28 np0005548915 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:17:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:17:28.249+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:17:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  6 05:17:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536510514' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  6 05:17:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  6 05:17:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180736574' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  6 05:17:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  6 05:17:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936739377' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 05:17:28 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26560 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 nova_compute[254819]: 2025-12-06 10:17:29.058 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  6 05:17:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248478302' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17295 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26587 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25670 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  6 05:17:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884993703' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25679 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17319 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26626 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25694 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:30.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 4055040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981173 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.767833710s of 27.774271011s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978769 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 4022272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2116000 session 0x55fce19c1680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979690 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 3981312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 3981312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.826157570s of 13.835227013s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979558 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979690 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981202 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.431484222s of 14.688600540s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 4292608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980611 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 4292608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 4235264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 4235264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 4218880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 4218880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 4202496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 4202496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 4202496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 4194304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 4194304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 4186112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 4186112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 4177920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 4177920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 4169728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 4169728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 4161536 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 4153344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 4153344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 4145152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 4145152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 4136960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 4128768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 4128768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf8c6000 session 0x55fce0e885a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 4104192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 4104192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 4087808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.066238403s of 57.868534088s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 4063232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 4063232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982123 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f89000 session 0x55fce11163c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce0e86b40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983635 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 4022272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 4022272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 4014080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.951421738s of 12.081957817s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 4014080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983503 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 4005888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 4005888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 4005888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983635 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 3981312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983635 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 3956736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 3956736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.536516190s of 15.642930984s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982453 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982321 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2134800 session 0x55fce23c01e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fcdfbacd20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982321 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 3907584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 3907584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982321 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 3899392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 3899392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 3891200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 3883008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 3883008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.969430923s of 20.490276337s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982453 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:30.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982453 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983965 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983965 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.678708076s of 16.780500412s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 3809280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 3809280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 3801088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 3801088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 3801088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 3792896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 3792896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 3784704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 3784704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 3768320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 3768320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 3760128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 3760128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 3751936 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 3751936 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 3727360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 3719168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 3719168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 3710976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 3710976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 3686400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 3686400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 3678208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 3678208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 3670016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 3670016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 3653632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 3645440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 3645440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 3604480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 3604480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 3547136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 3538944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 3538944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f8a000 session 0x55fce23c0960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 3489792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 3489792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.595787048s of 88.759666443s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983965 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 3481600 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 3481600 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 3473408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 3465216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 3465216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985477 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 3448832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 3448832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 3440640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 3440640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 3440640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984886 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 3432448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 3432448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 3432448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 21.58 MB, 0.04 MB/s
Interval WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 3358720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 3358720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984886 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 3350528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.758087158s of 16.814682007s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 3350528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 3342336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 3342336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 3334144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 3334144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 3334144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 3325952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 3325952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 3325952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 3317760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 3317760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 3309568 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 3301376 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 3293184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 3293184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 3293184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 3284992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 3284992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 3276800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 3276800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 3276800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 3268608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 3268608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 3252224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 3252224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 3244032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 3244032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 3244032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 3235840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 3235840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 3235840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 3227648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce1fae960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 3227648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 3219456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 3219456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 3211264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 3211264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 3211264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 3203072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 3203072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 3194880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 3194880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 3186688 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.575794220s of 42.580799103s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 3186688 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984886 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 3186688 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 3178496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 3178496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 3170304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 3170304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986398 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 3170304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 3162112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 3162112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 3153920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 3153920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985807 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 3145728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 3145728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 3145728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 3137536 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.588759422s of 15.598855019s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 3137536 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 3129344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 3129344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 3129344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 3121152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 3121152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 3112960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 3112960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 3112960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 3104768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 3104768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 3096576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 3096576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 3096576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 3088384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 3088384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 3055616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 3055616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 3047424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 3047424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 3047424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 3039232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 3039232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 3039232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 3031040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 3022848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 3022848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 3022848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 3014656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 3014656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 3006464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 3006464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 3006464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 2998272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 2998272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 2990080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 2990080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 2981888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 2981888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 2981888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 2973696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 2973696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 2973696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 2965504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 2965504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 2957312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 2957312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 80.777481079s of 80.781913757s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 3833856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 3768320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,3])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 3768320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f89000 session 0x55fce23c10e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf8c6000 session 0x55fce1e9a780
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 3670016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.180529594s of 14.258139610s, submitted: 375
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985807 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987319 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986728 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.019550323s of 14.427960396s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fcdf94c5a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.312339783s of 64.318359375s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986728 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986728 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.242930412s of 11.255201340s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1faed20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2116000 session 0x55fce1faf0e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.314544678s of 46.319801331s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986137 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987649 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989161 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.012979507s of 13.046799660s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf3a9000 session 0x55fce0e881e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce1117a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.331491470s of 27.335552216s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989161 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990673 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990082 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205703735s of 12.221550941s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf3a9000 session 0x55fce0e892c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce245f4a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.344425201s of 49.353523254s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce22ff860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce1ed5680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.083337784s of 14.100935936s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989623 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.310416222s of 14.321680069s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2135800 session 0x55fce1f0ab40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.857799530s of 63.862625122s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.674135208s of 16.811328888s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce1e64f00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9187 writes, 35K keys, 9187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9187 writes, 2104 syncs, 4.37 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 776 writes, 1212 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
Interval WAL: 776 writes, 372 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212a400 session 0x55fce2304000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2128400 session 0x55fce232eb40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 114.850975037s of 114.855049133s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991924 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.075698853s of 12.083848000s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000044s
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce23ebe00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce0f85680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.896926880s of 93.672317505s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991333 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.079085350s of 12.086176872s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 2211840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce0e87c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.565917969s of 60.611633301s, submitted: 340
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993175 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.864642143s of 14.892098427s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 93.336196899s of 93.339126587s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996809 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87080960 unmapped: 18792448 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 148 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1c2b40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 18784256 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [1])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 27164672 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 27156480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x1175832/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 150 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115338 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x117793a/0x1239000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb9e00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2116000 session 0x55fce1c6cb40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f87400 session 0x55fcdfbac1e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.056060791s of 33.534233093s, submitted: 52
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117304 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.519258499s of 12.529978752s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce212f800 session 0x55fce2305a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2136000 session 0x55fce0f843c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8b400 session 0x55fce236e000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120409 data_alloc: 218103808 data_used: 270336
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e9ba40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96247808 unmapped: 18022400 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2069400 session 0x55fce23052c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 18038784 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23c0b40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdf19e5a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c1c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce0f87860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88400 session 0x55fcdfbb63c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23bda40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177895 data_alloc: 218103808 data_used: 7086080
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fce112d680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1fc000/0x0/0x4ffc00000, data 0x1547bbd/0x160f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c01e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.694509506s of 10.920597076s, submitted: 65
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce1e9a960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1e0b40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce22ff4a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.842867851s of 12.866385460s, submitted: 19
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101531648 unmapped: 12738560 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239575 data_alloc: 234881024 data_used: 11247616
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 12148736 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243807 data_alloc: 234881024 data_used: 11247616
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.411628723s of 10.562047958s, submitted: 38
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247415 data_alloc: 234881024 data_used: 11251712
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246824 data_alloc: 234881024 data_used: 11251712
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246101 data_alloc: 234881024 data_used: 11251712
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212f800 session 0x55fce112c3c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 11771904 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.780336380s of 11.795572281s, submitted: 4
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2131800 session 0x55fce0f852c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136c00 session 0x55fce1e9be00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86400 session 0x55fce0f803c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1a01a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d800 session 0x55fcdeddcf00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312002 data_alloc: 234881024 data_used: 11780096
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce1f0b2c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce245d4a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efd800 session 0x55fce0f841e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1e0f00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314613 data_alloc: 234881024 data_used: 11780096
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2572288 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce1c6de00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e64000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.690454483s of 20.819118500s, submitted: 30
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 958464 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114335744 unmapped: 3080192 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 2711552 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418941 data_alloc: 234881024 data_used: 19755008
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417293 data_alloc: 234881024 data_used: 19755008
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.662245750s of 10.842704773s, submitted: 64
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce0f80f00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce19c1a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce23bc3c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258778 data_alloc: 234881024 data_used: 11780096
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce1e65680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf19fe00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86c00 session 0x55fce1c6d2c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 8028160 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,1])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2305860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.095330238s of 13.319671631s, submitted: 68
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172284 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.733273506s of 15.748806000s, submitted: 4
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171561 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1f0a1e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1f0a5a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1f0b0e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdeddd860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce19ff680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfbb7680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260916 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fcdf1c2f00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb6d20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1dad20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.256204605s of 10.393723488s, submitted: 26
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.100857735s of 12.104346275s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407304 data_alloc: 234881024 data_used: 19488768
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 20488192 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9033000/0x0/0x4ffc00000, data 0x2572b1d/0x2639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415702 data_alloc: 234881024 data_used: 19476480
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 18857984 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414798 data_alloc: 234881024 data_used: 19476480
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.115612030s of 12.360255241s, submitted: 80
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414878 data_alloc: 234881024 data_used: 19476480
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414966 data_alloc: 234881024 data_used: 19476480
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 18628608 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.287870407s of 13.304501534s, submitted: 4
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415814 data_alloc: 234881024 data_used: 19484672
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1d6960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfe7ed20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 18407424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19fe000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.696674347s of 28.839307785s, submitted: 37
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce232f0e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1c2000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfe7f4a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2101c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce20f7680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191346 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce20f70e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f90000/0x0/0x4ffc00000, data 0x1205b1d/0x12cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6780
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193160 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f74a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce210ad20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce23ea960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.855402946s of 17.599184036s, submitted: 5
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 25788416 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.634870529s of 11.777306557s, submitted: 33
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1048400 session 0x55fcdfeab4a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d6000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d1860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19fc780
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdff170e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256818 data_alloc: 218103808 data_used: 8298496
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdf1e0d20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d63c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d7680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.944304466s of 14.069879532s, submitted: 42
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff16b40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.114115715s of 12.152852058s, submitted: 12
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 18767872 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387114 data_alloc: 234881024 data_used: 12939264
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 17719296 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1d05a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401348 data_alloc: 234881024 data_used: 13160448
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 17219584 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c3f000/0x0/0x4ffc00000, data 0x2552bc2/0x261d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 2905 syncs, 3.80 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1859 writes, 5432 keys, 1859 commit groups, 1.0 writes per commit group, ingest: 5.24 MB, 0.01 MB/s#012Interval WAL: 1859 writes, 801 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdff17860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdfbb7680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.462458611s of 10.115522385s, submitted: 125
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239349 data_alloc: 218103808 data_used: 8298496
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce0f87e00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238137 data_alloc: 218103808 data_used: 8298496
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6d2c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfbb9860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdfbb9e00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206500 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026257515s of 12.886064529s, submitted: 81
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207421 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c05a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce19c14a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdedddc20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce19fc3c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.002244949s of 23.143316269s, submitted: 3
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215687 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fdc20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1dab40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c03c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1e0960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff5c1e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1a005a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218937 data_alloc: 218103808 data_used: 7618560
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 22347776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.405853271s of 17.451101303s, submitted: 13
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 18194432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 19800064 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308431 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.168425560s of 13.361434937s, submitted: 78
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307855 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9472000/0x0/0x4ffc00000, data 0x1d21b50/0x1dea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307695 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.022357941s of 12.031913757s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307703 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce11130e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fd4a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdfea7e00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfea6960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdfeeac00 session 0x55fce19c05a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbb79/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbbb2/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372406 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19c03c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.528182983s of 17.646516800s, submitted: 39
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd3000/0x0/0x4ffc00000, data 0x24bdbb2/0x2587000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 19562496 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 17809408 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484588 data_alloc: 234881024 data_used: 15458304
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb5bb2/0x2c7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485044 data_alloc: 234881024 data_used: 15536128
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.188508034s of 10.414656639s, submitted: 108
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479996 data_alloc: 234881024 data_used: 15540224
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 17063936 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce1a012c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19c1a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 17055744 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce210ad20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320544 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276282310s of 11.366744995s, submitted: 33
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320712 data_alloc: 218103808 data_used: 8380416
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fcdfb2d0e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce21010e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6c3c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb40/0x1247000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.033533096s of 23.158624649s, submitted: 40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 21397504 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115703808 unmapped: 21274624 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1,0,1])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1a010e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1116000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 17793024 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdf1d6b40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19fc780
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce23043c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce1e64780
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1e65a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286739 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.803172112s of 13.002218246s, submitted: 386
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdeddda40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287696 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 21045248 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 19922944 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335992 data_alloc: 234881024 data_used: 14737408
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce19fef00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce23bc1e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce210a5a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232510 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.927818298s of 17.092643738s, submitted: 51
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce20f72c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce23050e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce20f6000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 24641536 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfb2dc20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce0f863c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 24633344 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce23c14a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325714 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.299762726s of 10.467995644s, submitted: 46
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 23732224 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce23bc000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1dab40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 21028864 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fde00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112c000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23bd0e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce112d680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce0f87680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.559396744s of 28.672395706s, submitted: 41
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c1c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fcdeddc5a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce0f86000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce1fae960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce0e89a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291217 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b35000/0x0/0x4ffc00000, data 0x1660b1d/0x1727000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce2101c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce0f865a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23eaf00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112dc20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293031 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 25018368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce1e650e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1e652c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60800 session 0x55fcdf1c3c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: mgrc ms_handle_reset ms_handle_reset con 0x55fcdfeeb800
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3885409716
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3885409716,v1:192.168.122.100:6801/3885409716]
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: mgrc handle_mgr_configure stats_period=5
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f6f400 session 0x55fce245f680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.341133118s of 18.417297363s, submitted: 30
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 16695296 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e10e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 18685952 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423799 data_alloc: 234881024 data_used: 11702272
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420867 data_alloc: 234881024 data_used: 11702272
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.104361534s of 11.437482834s, submitted: 145
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8db8000/0x0/0x4ffc00000, data 0x23dcb2d/0x24a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423207 data_alloc: 234881024 data_used: 11714560
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb8960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1e1c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce232ed20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98f2000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256266 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257062 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.118231773s of 13.227775574s, submitted: 42
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c05a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0f863c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f872c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f87e00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.562622070s of 12.571432114s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 20856832 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f86d20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce20f72c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19fef00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce236ef00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c14a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317200 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988c000/0x0/0x4ffc00000, data 0x1907b8f/0x19d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e05a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319610 data_alloc: 218103808 data_used: 7618560
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdff17c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1112d20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce210a960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce2101860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0e86d20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.591188431s of 16.790163040s, submitted: 37
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fcdfbb9e00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce20f6f00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efcc00 session 0x55fce1e64960
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf19fe00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdf19f680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 18300928 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515640 data_alloc: 234881024 data_used: 15814656
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8312000/0x0/0x4ffc00000, data 0x2a61bc1/0x2b2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f86f00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129261568 unmapped: 11919360 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129269760 unmapped: 11911168 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 9068544 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 8937472 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543820 data_alloc: 234881024 data_used: 20639744
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543229 data_alloc: 234881024 data_used: 20639744
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.123311996s of 17.423311234s, submitted: 113
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 7577600 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 7888896 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 8241152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1606903 data_alloc: 234881024 data_used: 20910080
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133021696 unmapped: 8159232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.647055626s of 12.847999573s, submitted: 62
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdfea7e00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c56000 session 0x55fce19fc3c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 8085504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d63c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.591730118s of 10.648483276s, submitted: 22
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb7a40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce1c6d2c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdeddd860
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.378582001s of 26.526098251s, submitted: 53
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c05a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1a01c20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcde5783c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce112dc20
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce20f7680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323411 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1e652c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1e650e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce19c03c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1d6000
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326386 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125796352 unmapped: 20709376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.411964417s of 18.474147797s, submitted: 12
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91f5000/0x0/0x4ffc00000, data 0x1b8fb2d/0x1c57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404590 data_alloc: 234881024 data_used: 13590528
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdfea65a0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.284329414s of 15.530404091s, submitted: 41
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdeddcb40
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 23224320 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d1680
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2137800 session 0x55fce23bd2c0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce112cf00
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f70e0
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.873546600s of 35.962779999s, submitted: 29
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe780
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.201023102s of 21.206556320s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.126986504s of 10.132149696s, submitted: 1
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}'
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'config show' '{prefix=config show}'
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.665910721s of 16.765491486s, submitted: 2
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}'
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}'
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 23117824 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 23052288 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:17:30 np0005548915 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}'
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25712 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26671 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17364 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:17:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25727 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:30 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 05:17:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 05:17:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474324905' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26698 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25742 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  6 05:17:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/154680080' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26722 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17412 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25751 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  6 05:17:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911995297' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26737 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 nova_compute[254819]: 2025-12-06 10:17:31.998 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:32.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25763 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:32.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17454 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25775 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26767 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17475 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  6 05:17:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547471679' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  6 05:17:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25787 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209183984' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  6 05:17:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17487 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25808 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062865665' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3270305602' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416495288' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  6 05:17:33 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec  6 05:17:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117394827' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  6 05:17:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:34.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:34 np0005548915 nova_compute[254819]: 2025-12-06 10:17:34.061 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:34.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec  6 05:17:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619943755' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  6 05:17:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec  6 05:17:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4152843420' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  6 05:17:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec  6 05:17:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882127279' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  6 05:17:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1564210752' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265932852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  6 05:17:35 np0005548915 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963798927' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486939175' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 05:17:35 np0005548915 systemd[1]: Starting Hostname Service...
Dec  6 05:17:35 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec  6 05:17:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474186761' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  6 05:17:35 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26941 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:36.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:36 np0005548915 systemd[1]: Started Hostname Service.
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26947 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:17:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17619 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26959 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26971 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec  6 05:17:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260105720' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25940 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26989 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17634 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26986 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:37 np0005548915 nova_compute[254819]: 2025-12-06 10:17:36.999 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25955 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25961 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27001 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17643 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17655 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25979 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27025 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:37.680Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:17:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec  6 05:17:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2917337549' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17670 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27043 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:38.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25994 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:38.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979905628' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17694 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17700 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26015 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17721 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083141842' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 05:17:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26027 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:17:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:17:39 np0005548915 nova_compute[254819]: 2025-12-06 10:17:39.066 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17745 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/966773222' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 05:17:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26045 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 05:17:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26072 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 05:17:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 05:17:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:40.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:40.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:40 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27187 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4131632034' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.832853) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016260832933, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2358, "num_deletes": 251, "total_data_size": 4310560, "memory_usage": 4364080, "flush_reason": "Manual Compaction"}
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016260864989, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4202135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29600, "largest_seqno": 31957, "table_properties": {"data_size": 4191058, "index_size": 6931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26531, "raw_average_key_size": 21, "raw_value_size": 4167766, "raw_average_value_size": 3421, "num_data_blocks": 296, "num_entries": 1218, "num_filter_entries": 1218, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016059, "oldest_key_time": 1765016059, "file_creation_time": 1765016260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 32321 microseconds, and 9439 cpu microseconds.
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.865178) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4202135 bytes OK
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.865249) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.867646) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.867663) EVENT_LOG_v1 {"time_micros": 1765016260867658, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.867690) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4300187, prev total WAL file size 4300187, number of live WAL files 2.
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.869294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4103KB)], [65(12MB)]
Dec  6 05:17:40 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016260869364, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 17353273, "oldest_snapshot_seqno": -1}
Dec  6 05:17:40 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17835 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:17:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6527 keys, 15136219 bytes, temperature: kUnknown
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016261007341, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 15136219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15092775, "index_size": 26054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167457, "raw_average_key_size": 25, "raw_value_size": 14975382, "raw_average_value_size": 2294, "num_data_blocks": 1047, "num_entries": 6527, "num_filter_entries": 6527, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.007581) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 15136219 bytes
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.010307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.7 rd, 109.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.5 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7048, records dropped: 521 output_compression: NoCompression
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.010329) EVENT_LOG_v1 {"time_micros": 1765016261010320, "job": 36, "event": "compaction_finished", "compaction_time_micros": 138029, "compaction_time_cpu_micros": 29891, "output_level": 6, "num_output_files": 1, "total_output_size": 15136219, "num_input_records": 7048, "num_output_records": 6527, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016261011220, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016261016417, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.869160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595427889' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec  6 05:17:41 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26150 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec  6 05:17:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895847428' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec  6 05:17:41 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:42 np0005548915 nova_compute[254819]: 2025-12-06 10:17:42.002 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:42.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:42.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec  6 05:17:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756315161' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec  6 05:17:42 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27268 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec  6 05:17:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991843052' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec  6 05:17:43 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17889 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec  6 05:17:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121329675' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec  6 05:17:43 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27307 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:43 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26192 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:17:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7075 writes, 31K keys, 7074 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 7075 writes, 7074 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1558 writes, 6972 keys, 1558 commit groups, 1.0 writes per commit group, ingest: 11.87 MB, 0.02 MB/s#012Interval WAL: 1558 writes, 1558 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     90.2      0.56              0.14        18    0.031       0      0       0.0       0.0#012  L6      1/0   14.44 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.5    103.8     89.7      2.54              0.63        17    0.150     94K   9354       0.0       0.0#012 Sum      1/0   14.44 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.5     85.1     89.8      3.10              0.77        35    0.089     94K   9354       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9    111.8    114.6      0.61              0.19         8    0.077     26K   2592       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    103.8     89.7      2.54              0.63        17    0.150     94K   9354       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     91.2      0.55              0.14        17    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.049, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.12 MB/s write, 0.26 GB read, 0.11 MB/s read, 3.1 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 22.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000167 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1400,22.24 MB,7.31473%) FilterBlock(36,275.30 KB,0.0884357%) IndexBlock(36,484.64 KB,0.155685%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 05:17:43 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:17:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:44.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec  6 05:17:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968187861' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec  6 05:17:44 np0005548915 nova_compute[254819]: 2025-12-06 10:17:44.069 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:17:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:44.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:17:44 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17922 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:44 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27334 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec  6 05:17:44 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259075150' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec  6 05:17:44 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27340 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:44 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:45 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:45 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17943 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:45 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17961 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec  6 05:17:45 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3502764016' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec  6 05:17:45 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26228 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:45 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec  6 05:17:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531041810' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec  6 05:17:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:46.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:46 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26240 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec  6 05:17:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349497416' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec  6 05:17:46 np0005548915 podman[284673]: 2025-12-06 10:17:46.459315711 +0000 UTC m=+0.088309246 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 05:17:46 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17988 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.004 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17994 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ovs-appctl[285054]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  6 05:17:47 np0005548915 ovs-appctl[285071]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  6 05:17:47 np0005548915 ovs-appctl[285077]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27397 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec  6 05:17:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3610447336' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:17:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:47.683Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:17:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:47.684Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.874 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.874 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.874 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.875 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:17:47 np0005548915 nova_compute[254819]: 2025-12-06 10:17:47.875 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:17:47 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:17:47 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec  6 05:17:47 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586252455' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26261 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:48.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:17:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740259575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.347 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18036 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26267 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.516 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.517 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4333MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.518 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.518 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.635 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.635 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:17:48 np0005548915 nova_compute[254819]: 2025-12-06 10:17:48.657 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27448 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:48 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18045 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:49 np0005548915 nova_compute[254819]: 2025-12-06 10:17:49.072 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:17:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160220492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:17:49 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27469 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:49 np0005548915 nova_compute[254819]: 2025-12-06 10:17:49.169 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:17:49 np0005548915 nova_compute[254819]: 2025-12-06 10:17:49.175 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:17:49 np0005548915 nova_compute[254819]: 2025-12-06 10:17:49.207 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:17:49 np0005548915 nova_compute[254819]: 2025-12-06 10:17:49.208 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:17:49 np0005548915 nova_compute[254819]: 2025-12-06 10:17:49.208 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:17:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  6 05:17:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242421932' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  6 05:17:49 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26303 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec  6 05:17:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876669399' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec  6 05:17:49 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:50 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26315 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:17:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec  6 05:17:50 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739045502' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec  6 05:17:50 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18099 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:50 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27514 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:17:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:17:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec  6 05:17:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494036148' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 05:17:51 np0005548915 podman[286523]: 2025-12-06 10:17:51.364385175 +0000 UTC m=+0.099199062 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 05:17:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec  6 05:17:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484183676' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec  6 05:17:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec  6 05:17:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1987338407' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec  6 05:17:51 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:51 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26348 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:52 np0005548915 nova_compute[254819]: 2025-12-06 10:17:52.006 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec  6 05:17:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/130199360' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec  6 05:17:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec  6 05:17:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860231069' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec  6 05:17:52 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18141 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:52 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27574 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.209 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.210 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.210 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.211 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.228 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.229 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.229 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec  6 05:17:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626240640' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:53 np0005548915 nova_compute[254819]: 2025-12-06 10:17:53.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:17:53 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:17:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec  6 05:17:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/48299577' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec  6 05:17:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:17:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:17:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:17:54 np0005548915 nova_compute[254819]: 2025-12-06 10:17:54.076 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:54.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:17:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:17:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:17:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:17:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:17:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27604 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18180 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:54 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26393 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec  6 05:17:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211427813' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec  6 05:17:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:17:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18198 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27631 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18210 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:55 np0005548915 nova_compute[254819]: 2025-12-06 10:17:55.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:17:55 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26423 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec  6 05:17:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626797575' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec  6 05:17:55 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:56.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec  6 05:17:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812042877' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec  6 05:17:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec  6 05:17:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/443912516' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec  6 05:17:56 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26441 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:56 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18255 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:56 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27694 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:57 np0005548915 nova_compute[254819]: 2025-12-06 10:17:57.010 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18267 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26453 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27703 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:17:57 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec  6 05:17:57 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210231069' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 05:17:57 np0005548915 podman[287094]: 2025-12-06 10:17:57.572218669 +0000 UTC m=+0.065699460 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 05:17:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:57.685Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:17:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:57.685Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:17:57 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:17:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:17:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:58.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26480 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18303 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:17:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:17:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27736 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26486 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18315 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:58 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27748 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:58 np0005548915 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  6 05:17:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  6 05:17:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242664383' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  6 05:17:59 np0005548915 nova_compute[254819]: 2025-12-06 10:17:59.080 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:17:59 np0005548915 systemd[1]: Starting Time & Date Service...
Dec  6 05:17:59 np0005548915 systemd[1]: Started Time & Date Service.
Dec  6 05:17:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec  6 05:17:59 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/346786115' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec  6 05:17:59 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26507 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:17:59 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:17:59 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:18:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:00.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:00 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26513 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:18:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:18:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:18:01 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:18:02 np0005548915 nova_compute[254819]: 2025-12-06 10:18:02.011 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:18:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:02.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:18:03 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:18:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:04 np0005548915 nova_compute[254819]: 2025-12-06 10:18:04.084 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:18:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:18:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:18:05 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:18:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:18:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:06.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:18:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:07 np0005548915 nova_compute[254819]: 2025-12-06 10:18:07.012 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:07.685Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:18:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:07.686Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:18:07 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:18:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 05:18:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 05:18:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:18:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:18:09 np0005548915 nova_compute[254819]: 2025-12-06 10:18:09.087 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:09 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:18:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:18:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:18:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:18:11 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:18:12 np0005548915 nova_compute[254819]: 2025-12-06 10:18:12.016 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:12.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:13 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:18:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:18:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:18:14 np0005548915 nova_compute[254819]: 2025-12-06 10:18:14.090 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:18:15 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:18:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:17 np0005548915 nova_compute[254819]: 2025-12-06 10:18:17.017 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:17 np0005548915 podman[287689]: 2025-12-06 10:18:17.432547248 +0000 UTC m=+0.065882234 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  6 05:18:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:17.687Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:18:17 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:18:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:18:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:18:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:18.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:18:18 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:18:19 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:18:19 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:18:19 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:18:19 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:18:19 np0005548915 podman[287886]: 2025-12-06 10:18:19.059262707 +0000 UTC m=+0.039247600 container create 57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  6 05:18:19 np0005548915 nova_compute[254819]: 2025-12-06 10:18:19.094 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:18:19 np0005548915 systemd[1]: Started libpod-conmon-57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b.scope.
Dec  6 05:18:19 np0005548915 podman[287886]: 2025-12-06 10:18:19.041913045 +0000 UTC m=+0.021897958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:18:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:21:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:32 np0005548915 nova_compute[254819]: 2025-12-06 10:21:32.100 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:32 np0005548915 rsyslogd[1004]: imjournal: 2040 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  6 05:21:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:21:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:32.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:21:32 np0005548915 podman[290897]: 2025-12-06 10:21:32.442987943 +0000 UTC m=+0.065994967 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  6 05:21:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:32.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:21:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:21:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:34.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:21:34 np0005548915 nova_compute[254819]: 2025-12-06 10:21:34.457 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:34.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:21:35 np0005548915 nova_compute[254819]: 2025-12-06 10:21:35.339 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:36.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:21:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:36.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:21:37 np0005548915 nova_compute[254819]: 2025-12-06 10:21:37.101 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:21:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:21:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:37.710Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:21:37 np0005548915 podman[291120]: 2025-12-06 10:21:37.870064956 +0000 UTC m=+0.057538678 container create 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 05:21:37 np0005548915 systemd[1]: Started libpod-conmon-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope.
Dec  6 05:21:37 np0005548915 podman[291120]: 2025-12-06 10:21:37.839569156 +0000 UTC m=+0.027042938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:21:37 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:21:37 np0005548915 podman[291120]: 2025-12-06 10:21:37.957314991 +0000 UTC m=+0.144788723 container init 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 05:21:37 np0005548915 podman[291120]: 2025-12-06 10:21:37.965893674 +0000 UTC m=+0.153367346 container start 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:21:37 np0005548915 podman[291120]: 2025-12-06 10:21:37.969129032 +0000 UTC m=+0.156602744 container attach 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:21:37 np0005548915 elastic_liskov[291137]: 167 167
Dec  6 05:21:37 np0005548915 systemd[1]: libpod-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope: Deactivated successfully.
Dec  6 05:21:37 np0005548915 conmon[291137]: conmon 78f7dc750d1ece00c243 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope/container/memory.events
Dec  6 05:21:37 np0005548915 podman[291120]: 2025-12-06 10:21:37.97642572 +0000 UTC m=+0.163899402 container died 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:21:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay-9578f9aa551dad06e43a90725efa4304bc1bdd8a9c5ab7752440751f45ba22b1-merged.mount: Deactivated successfully.
Dec  6 05:21:38 np0005548915 podman[291120]: 2025-12-06 10:21:38.030543294 +0000 UTC m=+0.218016996 container remove 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:21:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:21:38 np0005548915 systemd[1]: libpod-conmon-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope: Deactivated successfully.
Dec  6 05:21:38 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:21:38 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:38 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:38 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:21:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:38 np0005548915 podman[291160]: 2025-12-06 10:21:38.289265946 +0000 UTC m=+0.063977363 container create a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:21:38 np0005548915 systemd[1]: Started libpod-conmon-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope.
Dec  6 05:21:38 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:21:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:38 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:38 np0005548915 podman[291160]: 2025-12-06 10:21:38.268417749 +0000 UTC m=+0.043129196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:21:38 np0005548915 podman[291160]: 2025-12-06 10:21:38.376436859 +0000 UTC m=+0.151148366 container init a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:21:38 np0005548915 podman[291160]: 2025-12-06 10:21:38.392187738 +0000 UTC m=+0.166899155 container start a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 05:21:38 np0005548915 podman[291160]: 2025-12-06 10:21:38.396298359 +0000 UTC m=+0.171009776 container attach a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:21:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:38 np0005548915 naughty_mirzakhani[291176]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:21:38 np0005548915 naughty_mirzakhani[291176]: --> All data devices are unavailable
Dec  6 05:21:38 np0005548915 systemd[1]: libpod-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope: Deactivated successfully.
Dec  6 05:21:38 np0005548915 conmon[291176]: conmon a834bc48ef3a7bb7632d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope/container/memory.events
Dec  6 05:21:38 np0005548915 podman[291160]: 2025-12-06 10:21:38.793325836 +0000 UTC m=+0.568037293 container died a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:21:38 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c-merged.mount: Deactivated successfully.
Dec  6 05:21:38 np0005548915 podman[291160]: 2025-12-06 10:21:38.840345786 +0000 UTC m=+0.615057193 container remove a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  6 05:21:38 np0005548915 systemd[1]: libpod-conmon-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope: Deactivated successfully.
Dec  6 05:21:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:21:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:21:39 np0005548915 podman[291296]: 2025-12-06 10:21:39.364710129 +0000 UTC m=+0.042673892 container create dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:21:39 np0005548915 systemd[1]: Started libpod-conmon-dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616.scope.
Dec  6 05:21:39 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:21:39 np0005548915 podman[291296]: 2025-12-06 10:21:39.345532067 +0000 UTC m=+0.023495830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:21:39 np0005548915 podman[291296]: 2025-12-06 10:21:39.444338496 +0000 UTC m=+0.122302269 container init dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:21:39 np0005548915 podman[291296]: 2025-12-06 10:21:39.45033481 +0000 UTC m=+0.128298553 container start dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 05:21:39 np0005548915 podman[291296]: 2025-12-06 10:21:39.454442572 +0000 UTC m=+0.132406645 container attach dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:21:39 np0005548915 gallant_lichterman[291313]: 167 167
Dec  6 05:21:39 np0005548915 systemd[1]: libpod-dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616.scope: Deactivated successfully.
Dec  6 05:21:39 np0005548915 podman[291296]: 2025-12-06 10:21:39.456828457 +0000 UTC m=+0.134792220 container died dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 05:21:39 np0005548915 nova_compute[254819]: 2025-12-06 10:21:39.461 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:39 np0005548915 systemd[1]: var-lib-containers-storage-overlay-15c3a16704ee358001658fc16cdc7ee50faf400914ce2a38f3a086fb7b9b980e-merged.mount: Deactivated successfully.
Dec  6 05:21:39 np0005548915 podman[291296]: 2025-12-06 10:21:39.493986898 +0000 UTC m=+0.171950641 container remove dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec  6 05:21:39 np0005548915 systemd[1]: libpod-conmon-dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616.scope: Deactivated successfully.
Dec  6 05:21:39 np0005548915 podman[291336]: 2025-12-06 10:21:39.66707973 +0000 UTC m=+0.048152982 container create 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 05:21:39 np0005548915 systemd[1]: Started libpod-conmon-5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e.scope.
Dec  6 05:21:39 np0005548915 podman[291336]: 2025-12-06 10:21:39.644718561 +0000 UTC m=+0.025791823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:21:39 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:21:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:39 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:39 np0005548915 podman[291336]: 2025-12-06 10:21:39.76481866 +0000 UTC m=+0.145891962 container init 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 05:21:39 np0005548915 podman[291336]: 2025-12-06 10:21:39.77585329 +0000 UTC m=+0.156926512 container start 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:21:39 np0005548915 podman[291336]: 2025-12-06 10:21:39.779761246 +0000 UTC m=+0.160834548 container attach 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  6 05:21:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]: {
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:    "1": [
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:        {
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "devices": [
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "/dev/loop3"
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            ],
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "lv_name": "ceph_lv0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "lv_size": "21470642176",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "name": "ceph_lv0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "tags": {
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.cluster_name": "ceph",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.crush_device_class": "",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.encrypted": "0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.osd_id": "1",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.type": "block",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.vdo": "0",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:                "ceph.with_tpm": "0"
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            },
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "type": "block",
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:            "vg_name": "ceph_vg0"
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:        }
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]:    ]
Dec  6 05:21:40 np0005548915 strange_ritchie[291354]: }
Dec  6 05:21:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:40 np0005548915 systemd[1]: libpod-5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e.scope: Deactivated successfully.
Dec  6 05:21:40 np0005548915 podman[291336]: 2025-12-06 10:21:40.067888929 +0000 UTC m=+0.448962141 container died 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:21:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28-merged.mount: Deactivated successfully.
Dec  6 05:21:40 np0005548915 podman[291336]: 2025-12-06 10:21:40.128553011 +0000 UTC m=+0.509626223 container remove 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  6 05:21:40 np0005548915 systemd[1]: libpod-conmon-5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e.scope: Deactivated successfully.
Dec  6 05:21:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:40.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:40 np0005548915 podman[291467]: 2025-12-06 10:21:40.723989568 +0000 UTC m=+0.042955610 container create 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  6 05:21:40 np0005548915 systemd[1]: Started libpod-conmon-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope.
Dec  6 05:21:40 np0005548915 podman[291467]: 2025-12-06 10:21:40.704322133 +0000 UTC m=+0.023288215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:21:40 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:21:40 np0005548915 podman[291467]: 2025-12-06 10:21:40.827915507 +0000 UTC m=+0.146881649 container init 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:21:40 np0005548915 podman[291467]: 2025-12-06 10:21:40.836603173 +0000 UTC m=+0.155569215 container start 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:21:40 np0005548915 podman[291467]: 2025-12-06 10:21:40.839987776 +0000 UTC m=+0.158953928 container attach 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:21:40 np0005548915 pedantic_feynman[291483]: 167 167
Dec  6 05:21:40 np0005548915 systemd[1]: libpod-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope: Deactivated successfully.
Dec  6 05:21:40 np0005548915 conmon[291483]: conmon 4d303a67b5a01721ecc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope/container/memory.events
Dec  6 05:21:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:21:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:21:40 np0005548915 podman[291488]: 2025-12-06 10:21:40.906971529 +0000 UTC m=+0.042872808 container died 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:21:40 np0005548915 systemd[1]: var-lib-containers-storage-overlay-570760ce12b8d998561e5c4629b2712bfa8dfb55a6763b545e981e73579a2ff0-merged.mount: Deactivated successfully.
Dec  6 05:21:40 np0005548915 podman[291488]: 2025-12-06 10:21:40.950068832 +0000 UTC m=+0.085970091 container remove 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:21:40 np0005548915 systemd[1]: libpod-conmon-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope: Deactivated successfully.
Dec  6 05:21:41 np0005548915 podman[291510]: 2025-12-06 10:21:41.167792499 +0000 UTC m=+0.049851219 container create 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:21:41 np0005548915 systemd[1]: Started libpod-conmon-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope.
Dec  6 05:21:41 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:21:41 np0005548915 podman[291510]: 2025-12-06 10:21:41.14836755 +0000 UTC m=+0.030426260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:21:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:41 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:21:41 np0005548915 podman[291510]: 2025-12-06 10:21:41.26080155 +0000 UTC m=+0.142860330 container init 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:21:41 np0005548915 podman[291510]: 2025-12-06 10:21:41.274891494 +0000 UTC m=+0.156950194 container start 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec  6 05:21:41 np0005548915 podman[291510]: 2025-12-06 10:21:41.278860981 +0000 UTC m=+0.160919701 container attach 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 05:21:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:42 np0005548915 lvm[291602]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:21:42 np0005548915 lvm[291602]: VG ceph_vg0 finished
Dec  6 05:21:42 np0005548915 friendly_wilbur[291526]: {}
Dec  6 05:21:42 np0005548915 nova_compute[254819]: 2025-12-06 10:21:42.105 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:42 np0005548915 systemd[1]: libpod-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope: Deactivated successfully.
Dec  6 05:21:42 np0005548915 systemd[1]: libpod-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope: Consumed 1.435s CPU time.
Dec  6 05:21:42 np0005548915 podman[291510]: 2025-12-06 10:21:42.138178402 +0000 UTC m=+1.020237082 container died 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec  6 05:21:42 np0005548915 systemd[1]: var-lib-containers-storage-overlay-49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9-merged.mount: Deactivated successfully.
Dec  6 05:21:42 np0005548915 podman[291510]: 2025-12-06 10:21:42.185554121 +0000 UTC m=+1.067612801 container remove 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:21:42 np0005548915 systemd[1]: libpod-conmon-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope: Deactivated successfully.
Dec  6 05:21:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:21:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:42 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:21:42 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:21:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:42.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:21:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:43 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:43 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:21:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:21:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:44.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:44 np0005548915 nova_compute[254819]: 2025-12-06 10:21:44.464 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:21:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:46.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:47 np0005548915 nova_compute[254819]: 2025-12-06 10:21:47.109 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:47.711Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:21:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:47.711Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:21:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:47.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:21:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:21:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:48.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:49 np0005548915 nova_compute[254819]: 2025-12-06 10:21:49.468 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:21:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:50 np0005548915 nova_compute[254819]: 2025-12-06 10:21:50.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:50 np0005548915 nova_compute[254819]: 2025-12-06 10:21:50.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:50 np0005548915 nova_compute[254819]: 2025-12-06 10:21:50.801 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:21:50 np0005548915 nova_compute[254819]: 2025-12-06 10:21:50.801 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:21:50 np0005548915 nova_compute[254819]: 2025-12-06 10:21:50.802 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:21:50 np0005548915 nova_compute[254819]: 2025-12-06 10:21:50.802 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:21:50 np0005548915 nova_compute[254819]: 2025-12-06 10:21:50.803 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:21:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:21:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:21:51 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:21:51 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012262960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.316 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.489 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.491 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4444MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.491 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.492 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.578 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.579 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:21:51 np0005548915 nova_compute[254819]: 2025-12-06 10:21:51.596 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:21:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:21:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514334911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:21:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:52 np0005548915 nova_compute[254819]: 2025-12-06 10:21:52.058 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:21:52 np0005548915 nova_compute[254819]: 2025-12-06 10:21:52.064 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:21:52 np0005548915 nova_compute[254819]: 2025-12-06 10:21:52.082 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:21:52 np0005548915 nova_compute[254819]: 2025-12-06 10:21:52.083 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:21:52 np0005548915 nova_compute[254819]: 2025-12-06 10:21:52.083 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:21:52 np0005548915 nova_compute[254819]: 2025-12-06 10:21:52.111 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:21:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:21:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:52.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:53 np0005548915 podman[291699]: 2025-12-06 10:21:53.429264462 +0000 UTC m=+0.053333903 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 05:21:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:21:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:21:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:21:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:21:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:21:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:21:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:21:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:21:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:21:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:21:54.249 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:21:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:21:54.250 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:21:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:21:54.250 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:21:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:21:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:54.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:21:54 np0005548915 nova_compute[254819]: 2025-12-06 10:21:54.473 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:21:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:54.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:21:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:21:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.084 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.085 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.085 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.085 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:21:56 np0005548915 nova_compute[254819]: 2025-12-06 10:21:56.773 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:21:57 np0005548915 nova_compute[254819]: 2025-12-06 10:21:57.113 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:21:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:57.713Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:21:57 np0005548915 nova_compute[254819]: 2025-12-06 10:21:57.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:57 np0005548915 nova_compute[254819]: 2025-12-06 10:21:57.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:21:57 np0005548915 nova_compute[254819]: 2025-12-06 10:21:57.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:21:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:21:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:21:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:58.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:21:58 np0005548915 podman[291750]: 2025-12-06 10:21:58.478644955 +0000 UTC m=+0.106085439 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  6 05:21:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:21:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:21:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:21:59 np0005548915 nova_compute[254819]: 2025-12-06 10:21:59.475 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:00.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:00.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  6 05:22:00 np0005548915 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec  6 05:22:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:00] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:22:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:00] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:22:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:02 np0005548915 nova_compute[254819]: 2025-12-06 10:22:02.114 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:02.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:03 np0005548915 podman[291780]: 2025-12-06 10:22:03.4539136 +0000 UTC m=+0.082613739 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 05:22:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Dec  6 05:22:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:04 np0005548915 nova_compute[254819]: 2025-12-06 10:22:04.479 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec  6 05:22:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:06.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:07 np0005548915 nova_compute[254819]: 2025-12-06 10:22:07.116 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:07.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:22:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Dec  6 05:22:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:22:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:22:09 np0005548915 nova_compute[254819]: 2025-12-06 10:22:09.483 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Dec  6 05:22:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:10.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:10] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:22:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:10] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:22:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Dec  6 05:22:12 np0005548915 nova_compute[254819]: 2025-12-06 10:22:12.117 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:12.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:12 np0005548915 nova_compute[254819]: 2025-12-06 10:22:12.806 254824 DEBUG oslo_concurrency.processutils [None req-1b326720-1719-4a67-9e7f-ab0eb7cb97ad bcb29c3303b24519a22c267aaed79458 3e0ab101ca7547d4a515169a0f2edef3 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:22:12 np0005548915 nova_compute[254819]: 2025-12-06 10:22:12.838 254824 DEBUG oslo_concurrency.processutils [None req-1b326720-1719-4a67-9e7f-ab0eb7cb97ad bcb29c3303b24519a22c267aaed79458 3e0ab101ca7547d4a515169a0f2edef3 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:22:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Dec  6 05:22:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:14 np0005548915 nova_compute[254819]: 2025-12-06 10:22:14.485 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec  6 05:22:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:16.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:17 np0005548915 nova_compute[254819]: 2025-12-06 10:22:17.118 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:17.715Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:22:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec  6 05:22:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:18.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:19 np0005548915 nova_compute[254819]: 2025-12-06 10:22:19.538 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:22:19.799 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 05:22:19 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:22:19.800 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 05:22:19 np0005548915 nova_compute[254819]: 2025-12-06 10:22:19.801 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:20.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:22:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec  6 05:22:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:22 np0005548915 nova_compute[254819]: 2025-12-06 10:22:22.155 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:22.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:22:23
Dec  6 05:22:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:22:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:22:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.nfs', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Dec  6 05:22:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:22:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:22:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:24.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:24 np0005548915 podman[291850]: 2025-12-06 10:22:24.451954257 +0000 UTC m=+0.076336161 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:22:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:22:24 np0005548915 nova_compute[254819]: 2025-12-06 10:22:24.590 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:22:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:22:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:27 np0005548915 nova_compute[254819]: 2025-12-06 10:22:27.156 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:27.716Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:22:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:28.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:29 np0005548915 podman[291874]: 2025-12-06 10:22:29.508068341 +0000 UTC m=+0.137856426 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  6 05:22:29 np0005548915 nova_compute[254819]: 2025-12-06 10:22:29.591 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:29 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:22:29.801 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 05:22:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:30.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:22:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:22:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:32 np0005548915 nova_compute[254819]: 2025-12-06 10:22:32.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:32.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:32.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:34.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:34 np0005548915 podman[291908]: 2025-12-06 10:22:34.457149141 +0000 UTC m=+0.081342567 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 05:22:34 np0005548915 nova_compute[254819]: 2025-12-06 10:22:34.594 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:22:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:36.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:22:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:36.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:37 np0005548915 nova_compute[254819]: 2025-12-06 10:22:37.161 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:37.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:22:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:38.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:38.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:22:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:22:39 np0005548915 nova_compute[254819]: 2025-12-06 10:22:39.598 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:40.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:40.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:40] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:22:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:40] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:22:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:42 np0005548915 nova_compute[254819]: 2025-12-06 10:22:42.163 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:42.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:42.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:43 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:22:44 np0005548915 podman[292135]: 2025-12-06 10:22:44.032406828 +0000 UTC m=+0.059522292 container create 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:22:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:44 np0005548915 systemd[1]: Started libpod-conmon-84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c.scope.
Dec  6 05:22:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:22:44 np0005548915 podman[292135]: 2025-12-06 10:22:44.015670242 +0000 UTC m=+0.042785726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:22:44 np0005548915 podman[292135]: 2025-12-06 10:22:44.128468065 +0000 UTC m=+0.155583619 container init 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:22:44 np0005548915 podman[292135]: 2025-12-06 10:22:44.135368332 +0000 UTC m=+0.162483836 container start 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 05:22:44 np0005548915 podman[292135]: 2025-12-06 10:22:44.139893076 +0000 UTC m=+0.167008580 container attach 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 05:22:44 np0005548915 blissful_grothendieck[292151]: 167 167
Dec  6 05:22:44 np0005548915 systemd[1]: libpod-84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c.scope: Deactivated successfully.
Dec  6 05:22:44 np0005548915 podman[292135]: 2025-12-06 10:22:44.143610247 +0000 UTC m=+0.170725711 container died 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  6 05:22:44 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6c978dee7ec5b5dea5d0b51b0cd77d49c2e7ae6d3bce5163c24dbb4f85ac72fd-merged.mount: Deactivated successfully.
Dec  6 05:22:44 np0005548915 podman[292135]: 2025-12-06 10:22:44.194862673 +0000 UTC m=+0.221978167 container remove 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 05:22:44 np0005548915 systemd[1]: libpod-conmon-84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c.scope: Deactivated successfully.
Dec  6 05:22:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:44.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:44 np0005548915 podman[292175]: 2025-12-06 10:22:44.398274923 +0000 UTC m=+0.059690246 container create 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 05:22:44 np0005548915 systemd[1]: Started libpod-conmon-795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec.scope.
Dec  6 05:22:44 np0005548915 podman[292175]: 2025-12-06 10:22:44.373969261 +0000 UTC m=+0.035384624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:22:44 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:22:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:44 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:44 np0005548915 podman[292175]: 2025-12-06 10:22:44.492438497 +0000 UTC m=+0.153853820 container init 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:22:44 np0005548915 podman[292175]: 2025-12-06 10:22:44.505246236 +0000 UTC m=+0.166661549 container start 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  6 05:22:44 np0005548915 podman[292175]: 2025-12-06 10:22:44.508762692 +0000 UTC m=+0.170178005 container attach 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 05:22:44 np0005548915 nova_compute[254819]: 2025-12-06 10:22:44.601 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:44.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:44 np0005548915 eloquent_boyd[292191]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:22:44 np0005548915 eloquent_boyd[292191]: --> All data devices are unavailable
Dec  6 05:22:44 np0005548915 systemd[1]: libpod-795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec.scope: Deactivated successfully.
Dec  6 05:22:44 np0005548915 podman[292175]: 2025-12-06 10:22:44.825405366 +0000 UTC m=+0.486820719 container died 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:22:44 np0005548915 systemd[1]: var-lib-containers-storage-overlay-567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283-merged.mount: Deactivated successfully.
Dec  6 05:22:44 np0005548915 podman[292175]: 2025-12-06 10:22:44.87291342 +0000 UTC m=+0.534328753 container remove 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  6 05:22:44 np0005548915 systemd[1]: libpod-conmon-795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec.scope: Deactivated successfully.
Dec  6 05:22:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:45 np0005548915 podman[292312]: 2025-12-06 10:22:45.447275813 +0000 UTC m=+0.057686772 container create 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:22:45 np0005548915 systemd[1]: Started libpod-conmon-66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e.scope.
Dec  6 05:22:45 np0005548915 podman[292312]: 2025-12-06 10:22:45.418309144 +0000 UTC m=+0.028720153 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:22:45 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:22:45 np0005548915 podman[292312]: 2025-12-06 10:22:45.557221058 +0000 UTC m=+0.167631997 container init 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 05:22:45 np0005548915 podman[292312]: 2025-12-06 10:22:45.565101232 +0000 UTC m=+0.175512151 container start 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:22:45 np0005548915 podman[292312]: 2025-12-06 10:22:45.56831698 +0000 UTC m=+0.178727899 container attach 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:22:45 np0005548915 suspicious_chatterjee[292328]: 167 167
Dec  6 05:22:45 np0005548915 systemd[1]: libpod-66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e.scope: Deactivated successfully.
Dec  6 05:22:45 np0005548915 podman[292312]: 2025-12-06 10:22:45.573542842 +0000 UTC m=+0.183953771 container died 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:22:45 np0005548915 systemd[1]: var-lib-containers-storage-overlay-ac49f563a00066ac8ba994b762ae954377197e937cf2375760c977d2ec60990d-merged.mount: Deactivated successfully.
Dec  6 05:22:45 np0005548915 podman[292312]: 2025-12-06 10:22:45.614282552 +0000 UTC m=+0.224693471 container remove 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec  6 05:22:45 np0005548915 systemd[1]: libpod-conmon-66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e.scope: Deactivated successfully.
Dec  6 05:22:45 np0005548915 podman[292356]: 2025-12-06 10:22:45.816600402 +0000 UTC m=+0.052397419 container create 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 05:22:45 np0005548915 systemd[1]: Started libpod-conmon-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope.
Dec  6 05:22:45 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:22:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:45 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:45 np0005548915 podman[292356]: 2025-12-06 10:22:45.797569603 +0000 UTC m=+0.033366660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:22:45 np0005548915 podman[292356]: 2025-12-06 10:22:45.894167714 +0000 UTC m=+0.129964761 container init 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  6 05:22:45 np0005548915 podman[292356]: 2025-12-06 10:22:45.902114751 +0000 UTC m=+0.137911778 container start 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 05:22:45 np0005548915 podman[292356]: 2025-12-06 10:22:45.905442662 +0000 UTC m=+0.141239839 container attach 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:22:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:22:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/142882542' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:22:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:22:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/142882542' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:22:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:46.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:46.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]: {
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:    "1": [
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:        {
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "devices": [
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "/dev/loop3"
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            ],
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "lv_name": "ceph_lv0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "lv_size": "21470642176",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "name": "ceph_lv0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "tags": {
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.cluster_name": "ceph",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.crush_device_class": "",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.encrypted": "0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.osd_id": "1",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.type": "block",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.vdo": "0",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:                "ceph.with_tpm": "0"
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            },
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "type": "block",
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:            "vg_name": "ceph_vg0"
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:        }
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]:    ]
Dec  6 05:22:46 np0005548915 sweet_yonath[292373]: }
Dec  6 05:22:46 np0005548915 systemd[1]: libpod-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope: Deactivated successfully.
Dec  6 05:22:46 np0005548915 conmon[292373]: conmon 2475192e9b4ed289b9ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope/container/memory.events
Dec  6 05:22:46 np0005548915 podman[292356]: 2025-12-06 10:22:46.656957699 +0000 UTC m=+0.892754706 container died 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 05:22:46 np0005548915 systemd[1]: var-lib-containers-storage-overlay-22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6-merged.mount: Deactivated successfully.
Dec  6 05:22:46 np0005548915 podman[292356]: 2025-12-06 10:22:46.691929722 +0000 UTC m=+0.927726749 container remove 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 05:22:46 np0005548915 systemd[1]: libpod-conmon-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope: Deactivated successfully.
Dec  6 05:22:47 np0005548915 nova_compute[254819]: 2025-12-06 10:22:47.164 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:47 np0005548915 podman[292484]: 2025-12-06 10:22:47.262619114 +0000 UTC m=+0.035321322 container create 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:22:47 np0005548915 systemd[1]: Started libpod-conmon-2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0.scope.
Dec  6 05:22:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:22:47 np0005548915 podman[292484]: 2025-12-06 10:22:47.335697985 +0000 UTC m=+0.108400193 container init 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:22:47 np0005548915 podman[292484]: 2025-12-06 10:22:47.247349619 +0000 UTC m=+0.020051847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:22:47 np0005548915 podman[292484]: 2025-12-06 10:22:47.343412615 +0000 UTC m=+0.116114813 container start 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:22:47 np0005548915 podman[292484]: 2025-12-06 10:22:47.347319342 +0000 UTC m=+0.120021560 container attach 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:22:47 np0005548915 sleepy_yonath[292500]: 167 167
Dec  6 05:22:47 np0005548915 systemd[1]: libpod-2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0.scope: Deactivated successfully.
Dec  6 05:22:47 np0005548915 podman[292484]: 2025-12-06 10:22:47.349662525 +0000 UTC m=+0.122364723 container died 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 05:22:47 np0005548915 systemd[1]: var-lib-containers-storage-overlay-debdef475b0e690a29c01b9b2594a9f2ae63902259fec544a5b852540cbdceed-merged.mount: Deactivated successfully.
Dec  6 05:22:47 np0005548915 podman[292484]: 2025-12-06 10:22:47.382183241 +0000 UTC m=+0.154885429 container remove 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 05:22:47 np0005548915 systemd[1]: libpod-conmon-2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0.scope: Deactivated successfully.
Dec  6 05:22:47 np0005548915 podman[292524]: 2025-12-06 10:22:47.52972379 +0000 UTC m=+0.035556140 container create 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:22:47 np0005548915 systemd[1]: Started libpod-conmon-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope.
Dec  6 05:22:47 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:22:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:47 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:22:47 np0005548915 podman[292524]: 2025-12-06 10:22:47.597615199 +0000 UTC m=+0.103447549 container init 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:22:47 np0005548915 podman[292524]: 2025-12-06 10:22:47.608895816 +0000 UTC m=+0.114728166 container start 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:22:47 np0005548915 podman[292524]: 2025-12-06 10:22:47.514362901 +0000 UTC m=+0.020195271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:22:47 np0005548915 podman[292524]: 2025-12-06 10:22:47.611732913 +0000 UTC m=+0.117565263 container attach 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  6 05:22:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:47.718Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:22:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:48 np0005548915 lvm[292617]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:22:48 np0005548915 lvm[292617]: VG ceph_vg0 finished
Dec  6 05:22:48 np0005548915 beautiful_mahavira[292541]: {}
Dec  6 05:22:48 np0005548915 systemd[1]: libpod-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope: Deactivated successfully.
Dec  6 05:22:48 np0005548915 systemd[1]: libpod-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope: Consumed 1.131s CPU time.
Dec  6 05:22:48 np0005548915 conmon[292541]: conmon 806b83a3b5f4ca871133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope/container/memory.events
Dec  6 05:22:48 np0005548915 podman[292524]: 2025-12-06 10:22:48.299734321 +0000 UTC m=+0.805566671 container died 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 05:22:48 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92-merged.mount: Deactivated successfully.
Dec  6 05:22:48 np0005548915 podman[292524]: 2025-12-06 10:22:48.334222801 +0000 UTC m=+0.840055151 container remove 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:22:48 np0005548915 systemd[1]: libpod-conmon-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope: Deactivated successfully.
Dec  6 05:22:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:48.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:22:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:48 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:22:48 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:48 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:48 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:22:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:48.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:49 np0005548915 nova_compute[254819]: 2025-12-06 10:22:49.647 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:22:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:50.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:22:50 np0005548915 nova_compute[254819]: 2025-12-06 10:22:50.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:22:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:22:51 np0005548915 nova_compute[254819]: 2025-12-06 10:22:51.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:51 np0005548915 nova_compute[254819]: 2025-12-06 10:22:51.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:22:51 np0005548915 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:22:51 np0005548915 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:22:51 np0005548915 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:22:51 np0005548915 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:22:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.166 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:52 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:22:52 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784602875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.236 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:22:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.452 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.453 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4425MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.453 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.454 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.535 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.536 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:22:52 np0005548915 nova_compute[254819]: 2025-12-06 10:22:52.558 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:22:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:52.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:22:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605113535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:22:53 np0005548915 nova_compute[254819]: 2025-12-06 10:22:53.026 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:22:53 np0005548915 nova_compute[254819]: 2025-12-06 10:22:53.034 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:22:53 np0005548915 nova_compute[254819]: 2025-12-06 10:22:53.056 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:22:53 np0005548915 nova_compute[254819]: 2025-12-06 10:22:53.059 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:22:53 np0005548915 nova_compute[254819]: 2025-12-06 10:22:53.060 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:22:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:22:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:22:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:22:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:22:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:22:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:22:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:22:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:22:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:22:54.250 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:22:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:22:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:22:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:22:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:22:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:54.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:54 np0005548915 nova_compute[254819]: 2025-12-06 10:22:54.649 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:22:55 np0005548915 podman[292712]: 2025-12-06 10:22:55.448652206 +0000 UTC m=+0.083971299 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec  6 05:22:56 np0005548915 nova_compute[254819]: 2025-12-06 10:22:56.061 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:56 np0005548915 nova_compute[254819]: 2025-12-06 10:22:56.062 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:56 np0005548915 nova_compute[254819]: 2025-12-06 10:22:56.062 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:22:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:22:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:56.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:22:56 np0005548915 nova_compute[254819]: 2025-12-06 10:22:56.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:56 np0005548915 nova_compute[254819]: 2025-12-06 10:22:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:22:56 np0005548915 nova_compute[254819]: 2025-12-06 10:22:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:22:56 np0005548915 nova_compute[254819]: 2025-12-06 10:22:56.771 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:22:57 np0005548915 nova_compute[254819]: 2025-12-06 10:22:57.168 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:57.720Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:22:57 np0005548915 nova_compute[254819]: 2025-12-06 10:22:57.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:22:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:22:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:58.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:22:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:22:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:22:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:58.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:22:58 np0005548915 nova_compute[254819]: 2025-12-06 10:22:58.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:58 np0005548915 nova_compute[254819]: 2025-12-06 10:22:58.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:22:58 np0005548915 nova_compute[254819]: 2025-12-06 10:22:58.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:22:59 np0005548915 nova_compute[254819]: 2025-12-06 10:22:59.651 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:22:59 np0005548915 nova_compute[254819]: 2025-12-06 10:22:59.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:00.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:00 np0005548915 podman[292764]: 2025-12-06 10:23:00.507555748 +0000 UTC m=+0.126044314 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 05:23:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:00.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:23:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  6 05:23:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:02 np0005548915 nova_compute[254819]: 2025-12-06 10:23:02.180 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:02.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:23:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1293102009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:23:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:04.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:04 np0005548915 nova_compute[254819]: 2025-12-06 10:23:04.700 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:05 np0005548915 podman[292794]: 2025-12-06 10:23:05.430724333 +0000 UTC m=+0.064858057 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 05:23:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:06.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:07 np0005548915 nova_compute[254819]: 2025-12-06 10:23:07.182 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:07.722Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:23:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:08.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:23:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:23:09 np0005548915 nova_compute[254819]: 2025-12-06 10:23:09.703 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:10.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:10.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:23:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:23:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:12 np0005548915 nova_compute[254819]: 2025-12-06 10:23:12.210 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:12.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:12.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:14.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:14.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:14 np0005548915 nova_compute[254819]: 2025-12-06 10:23:14.751 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:16.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:16.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:17 np0005548915 nova_compute[254819]: 2025-12-06 10:23:17.250 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:17.723Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:23:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:18.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:18.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:19 np0005548915 nova_compute[254819]: 2025-12-06 10:23:19.803 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:20.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:20.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:23:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  6 05:23:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:22 np0005548915 nova_compute[254819]: 2025-12-06 10:23:22.315 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:23:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:22.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:23:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:22.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:23:23
Dec  6 05:23:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:23:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:23:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Dec  6 05:23:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:23:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:23:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:23:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:23:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:24.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:24 np0005548915 nova_compute[254819]: 2025-12-06 10:23:24.806 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:26.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:26 np0005548915 podman[292860]: 2025-12-06 10:23:26.432319831 +0000 UTC m=+0.062987307 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 05:23:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:27 np0005548915 nova_compute[254819]: 2025-12-06 10:23:27.318 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:27.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:23:27 np0005548915 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec  6 05:23:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:28.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:29 np0005548915 nova_compute[254819]: 2025-12-06 10:23:29.841 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:30.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  6 05:23:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  6 05:23:31 np0005548915 podman[292884]: 2025-12-06 10:23:31.490296597 +0000 UTC m=+0.113856842 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 05:23:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:32 np0005548915 nova_compute[254819]: 2025-12-06 10:23:32.320 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:34.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:34.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:34 np0005548915 nova_compute[254819]: 2025-12-06 10:23:34.878 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:36 np0005548915 podman[292916]: 2025-12-06 10:23:36.423371624 +0000 UTC m=+0.055605636 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:23:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:36.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:37 np0005548915 nova_compute[254819]: 2025-12-06 10:23:37.375 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:37.726Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:23:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:38.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:23:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:23:39 np0005548915 nova_compute[254819]: 2025-12-06 10:23:39.881 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:23:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:23:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:42 np0005548915 nova_compute[254819]: 2025-12-06 10:23:42.409 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:44.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:44.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:44 np0005548915 nova_compute[254819]: 2025-12-06 10:23:44.921 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:23:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1770852714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:23:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:23:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1770852714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:23:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:46.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:47 np0005548915 nova_compute[254819]: 2025-12-06 10:23:47.452 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:47.727Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:23:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:23:49 np0005548915 nova_compute[254819]: 2025-12-06 10:23:49.925 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:49 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:23:50 np0005548915 podman[293145]: 2025-12-06 10:23:50.058972195 +0000 UTC m=+0.039440545 container create e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.079657) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630079694, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1539, "num_deletes": 250, "total_data_size": 2795645, "memory_usage": 2853456, "flush_reason": "Manual Compaction"}
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630095998, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2747624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34792, "largest_seqno": 36330, "table_properties": {"data_size": 2740571, "index_size": 4060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13926, "raw_average_key_size": 18, "raw_value_size": 2726508, "raw_average_value_size": 3669, "num_data_blocks": 178, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016481, "oldest_key_time": 1765016481, "file_creation_time": 1765016630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16380 microseconds, and 5194 cpu microseconds.
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:23:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:50 np0005548915 systemd[1]: Started libpod-conmon-e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf.scope.
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.096038) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2747624 bytes OK
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.096056) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.103668) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.103683) EVENT_LOG_v1 {"time_micros": 1765016630103678, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.103698) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2789098, prev total WAL file size 2789098, number of live WAL files 2.
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.104674) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2683KB)], [74(12MB)]
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630104705, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15892564, "oldest_snapshot_seqno": -1}
Dec  6 05:23:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:23:50 np0005548915 podman[293145]: 2025-12-06 10:23:50.042249181 +0000 UTC m=+0.022717541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6850 keys, 14489121 bytes, temperature: kUnknown
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630227653, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 14489121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14444812, "index_size": 26085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178738, "raw_average_key_size": 26, "raw_value_size": 14322951, "raw_average_value_size": 2090, "num_data_blocks": 1032, "num_entries": 6850, "num_filter_entries": 6850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.228126) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 14489121 bytes
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.230166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.1 rd, 117.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 12.5 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(11.1) write-amplify(5.3) OK, records in: 7364, records dropped: 514 output_compression: NoCompression
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.230197) EVENT_LOG_v1 {"time_micros": 1765016630230183, "job": 42, "event": "compaction_finished", "compaction_time_micros": 123061, "compaction_time_cpu_micros": 29297, "output_level": 6, "num_output_files": 1, "total_output_size": 14489121, "num_input_records": 7364, "num_output_records": 6850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630231195, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec  6 05:23:50 np0005548915 podman[293145]: 2025-12-06 10:23:50.231518165 +0000 UTC m=+0.211986605 container init e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630235913, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.104599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:23:50 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:23:50 np0005548915 podman[293145]: 2025-12-06 10:23:50.238213747 +0000 UTC m=+0.218682097 container start e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 05:23:50 np0005548915 podman[293145]: 2025-12-06 10:23:50.241076616 +0000 UTC m=+0.221545056 container attach e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 05:23:50 np0005548915 adoring_davinci[293161]: 167 167
Dec  6 05:23:50 np0005548915 systemd[1]: libpod-e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf.scope: Deactivated successfully.
Dec  6 05:23:50 np0005548915 podman[293145]: 2025-12-06 10:23:50.246967216 +0000 UTC m=+0.227435606 container died e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:23:50 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b5d419014c9c7b5aa1e3b96c6211e17983c8f09231652b463e8943b3c8431531-merged.mount: Deactivated successfully.
Dec  6 05:23:50 np0005548915 podman[293145]: 2025-12-06 10:23:50.299892207 +0000 UTC m=+0.280360567 container remove e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:23:50 np0005548915 systemd[1]: libpod-conmon-e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf.scope: Deactivated successfully.
Dec  6 05:23:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:50.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:50 np0005548915 podman[293187]: 2025-12-06 10:23:50.479433688 +0000 UTC m=+0.050809886 container create 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:23:50 np0005548915 systemd[1]: Started libpod-conmon-55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342.scope.
Dec  6 05:23:50 np0005548915 podman[293187]: 2025-12-06 10:23:50.463962446 +0000 UTC m=+0.035338664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:23:50 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:23:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:50 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:50 np0005548915 podman[293187]: 2025-12-06 10:23:50.591576562 +0000 UTC m=+0.162952780 container init 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 05:23:50 np0005548915 podman[293187]: 2025-12-06 10:23:50.604711429 +0000 UTC m=+0.176087667 container start 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 05:23:50 np0005548915 podman[293187]: 2025-12-06 10:23:50.611754891 +0000 UTC m=+0.183131099 container attach 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 05:23:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:23:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:23:50 np0005548915 inspiring_panini[293203]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:23:50 np0005548915 inspiring_panini[293203]: --> All data devices are unavailable
Dec  6 05:23:50 np0005548915 systemd[1]: libpod-55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342.scope: Deactivated successfully.
Dec  6 05:23:50 np0005548915 podman[293187]: 2025-12-06 10:23:50.971344314 +0000 UTC m=+0.542720552 container died 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  6 05:23:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43-merged.mount: Deactivated successfully.
Dec  6 05:23:51 np0005548915 podman[293187]: 2025-12-06 10:23:51.036361425 +0000 UTC m=+0.607737623 container remove 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:23:51 np0005548915 systemd[1]: libpod-conmon-55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342.scope: Deactivated successfully.
Dec  6 05:23:51 np0005548915 podman[293319]: 2025-12-06 10:23:51.607784858 +0000 UTC m=+0.039326402 container create b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 05:23:51 np0005548915 systemd[1]: Started libpod-conmon-b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451.scope.
Dec  6 05:23:51 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:23:51 np0005548915 podman[293319]: 2025-12-06 10:23:51.590862767 +0000 UTC m=+0.022404341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:23:51 np0005548915 podman[293319]: 2025-12-06 10:23:51.686649337 +0000 UTC m=+0.118190901 container init b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  6 05:23:51 np0005548915 podman[293319]: 2025-12-06 10:23:51.694271484 +0000 UTC m=+0.125813028 container start b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:23:51 np0005548915 podman[293319]: 2025-12-06 10:23:51.697672736 +0000 UTC m=+0.129214280 container attach b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:23:51 np0005548915 exciting_northcutt[293335]: 167 167
Dec  6 05:23:51 np0005548915 systemd[1]: libpod-b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451.scope: Deactivated successfully.
Dec  6 05:23:51 np0005548915 nova_compute[254819]: 2025-12-06 10:23:51.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:51 np0005548915 podman[293342]: 2025-12-06 10:23:51.754962347 +0000 UTC m=+0.036416893 container died b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 05:23:51 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a707ff5d741dd0bd73e723b37d97bd3b82f62d64fe69bbc22d43ead87f476c9a-merged.mount: Deactivated successfully.
Dec  6 05:23:51 np0005548915 podman[293342]: 2025-12-06 10:23:51.790836634 +0000 UTC m=+0.072291170 container remove b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:23:51 np0005548915 systemd[1]: libpod-conmon-b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451.scope: Deactivated successfully.
Dec  6 05:23:51 np0005548915 podman[293364]: 2025-12-06 10:23:51.98054763 +0000 UTC m=+0.041310305 container create 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  6 05:23:52 np0005548915 systemd[1]: Started libpod-conmon-16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e.scope.
Dec  6 05:23:52 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:23:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:52 np0005548915 podman[293364]: 2025-12-06 10:23:51.962932501 +0000 UTC m=+0.023695186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:23:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:52 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:52 np0005548915 podman[293364]: 2025-12-06 10:23:52.068035394 +0000 UTC m=+0.128798109 container init 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 05:23:52 np0005548915 podman[293364]: 2025-12-06 10:23:52.079912447 +0000 UTC m=+0.140675132 container start 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:23:52 np0005548915 podman[293364]: 2025-12-06 10:23:52.083644648 +0000 UTC m=+0.144407323 container attach 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:23:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:52 np0005548915 boring_euclid[293380]: {
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:    "1": [
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:        {
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "devices": [
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "/dev/loop3"
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            ],
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "lv_name": "ceph_lv0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "lv_size": "21470642176",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "name": "ceph_lv0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "tags": {
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.cluster_name": "ceph",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.crush_device_class": "",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.encrypted": "0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.osd_id": "1",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.type": "block",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.vdo": "0",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:                "ceph.with_tpm": "0"
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            },
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "type": "block",
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:            "vg_name": "ceph_vg0"
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:        }
Dec  6 05:23:52 np0005548915 boring_euclid[293380]:    ]
Dec  6 05:23:52 np0005548915 boring_euclid[293380]: }
Dec  6 05:23:52 np0005548915 systemd[1]: libpod-16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e.scope: Deactivated successfully.
Dec  6 05:23:52 np0005548915 podman[293364]: 2025-12-06 10:23:52.371099648 +0000 UTC m=+0.431862313 container died 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 05:23:52 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9-merged.mount: Deactivated successfully.
Dec  6 05:23:52 np0005548915 podman[293364]: 2025-12-06 10:23:52.41893307 +0000 UTC m=+0.479695735 container remove 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 05:23:52 np0005548915 systemd[1]: libpod-conmon-16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e.scope: Deactivated successfully.
Dec  6 05:23:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:52.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:52 np0005548915 nova_compute[254819]: 2025-12-06 10:23:52.456 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:52 np0005548915 nova_compute[254819]: 2025-12-06 10:23:52.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:52 np0005548915 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:23:52 np0005548915 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:23:52 np0005548915 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:23:52 np0005548915 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:23:52 np0005548915 nova_compute[254819]: 2025-12-06 10:23:52.780 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:23:53 np0005548915 podman[293512]: 2025-12-06 10:23:53.022411636 +0000 UTC m=+0.053970300 container create 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:23:53 np0005548915 systemd[1]: Started libpod-conmon-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope.
Dec  6 05:23:53 np0005548915 podman[293512]: 2025-12-06 10:23:52.99241057 +0000 UTC m=+0.023969274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:23:53 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:23:53 np0005548915 podman[293512]: 2025-12-06 10:23:53.10771122 +0000 UTC m=+0.139269884 container init 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  6 05:23:53 np0005548915 podman[293512]: 2025-12-06 10:23:53.114514535 +0000 UTC m=+0.146073149 container start 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:23:53 np0005548915 podman[293512]: 2025-12-06 10:23:53.118786572 +0000 UTC m=+0.150345286 container attach 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 05:23:53 np0005548915 systemd[1]: libpod-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope: Deactivated successfully.
Dec  6 05:23:53 np0005548915 condescending_ardinghelli[293528]: 167 167
Dec  6 05:23:53 np0005548915 conmon[293528]: conmon 9ac7fd11fd123aa5a98f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope/container/memory.events
Dec  6 05:23:53 np0005548915 podman[293512]: 2025-12-06 10:23:53.122123282 +0000 UTC m=+0.153681926 container died 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:23:53 np0005548915 systemd[1]: var-lib-containers-storage-overlay-580e7b07db2c4681058c9940fa9c8dfbb2e02452fcfc5ca3a316ca7fbfa3e689-merged.mount: Deactivated successfully.
Dec  6 05:23:53 np0005548915 podman[293512]: 2025-12-06 10:23:53.161158335 +0000 UTC m=+0.192716959 container remove 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  6 05:23:53 np0005548915 systemd[1]: libpod-conmon-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope: Deactivated successfully.
Dec  6 05:23:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:23:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252620075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.237 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:23:53 np0005548915 podman[293552]: 2025-12-06 10:23:53.412736298 +0000 UTC m=+0.057947930 container create 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.435 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.437 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4445MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.438 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.438 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:23:53 np0005548915 systemd[1]: Started libpod-conmon-111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e.scope.
Dec  6 05:23:53 np0005548915 podman[293552]: 2025-12-06 10:23:53.387086559 +0000 UTC m=+0.032298231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:23:53 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:23:53 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:53 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:53 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:53 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:23:53 np0005548915 podman[293552]: 2025-12-06 10:23:53.507702044 +0000 UTC m=+0.152913706 container init 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.511 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.512 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:23:53 np0005548915 podman[293552]: 2025-12-06 10:23:53.516304088 +0000 UTC m=+0.161515720 container start 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Dec  6 05:23:53 np0005548915 podman[293552]: 2025-12-06 10:23:53.519752422 +0000 UTC m=+0.164964044 container attach 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.529 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:23:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:23:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964484987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.972 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.978 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:23:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:23:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.998 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:23:53 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.999 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:23:54 np0005548915 nova_compute[254819]: 2025-12-06 10:23:53.999 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:23:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:23:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:23:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:23:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:23:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:23:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:23:54 np0005548915 lvm[293666]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:23:54 np0005548915 lvm[293666]: VG ceph_vg0 finished
Dec  6 05:23:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:54 np0005548915 determined_solomon[293568]: {}
Dec  6 05:23:54 np0005548915 systemd[1]: libpod-111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e.scope: Deactivated successfully.
Dec  6 05:23:54 np0005548915 podman[293669]: 2025-12-06 10:23:54.234361935 +0000 UTC m=+0.027063949 container died 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 05:23:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:23:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:23:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:23:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:23:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:23:54.253 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:23:54 np0005548915 systemd[1]: var-lib-containers-storage-overlay-d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367-merged.mount: Deactivated successfully.
Dec  6 05:23:54 np0005548915 podman[293669]: 2025-12-06 10:23:54.276075711 +0000 UTC m=+0.068777705 container remove 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  6 05:23:54 np0005548915 systemd[1]: libpod-conmon-111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e.scope: Deactivated successfully.
Dec  6 05:23:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:23:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:23:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:54 np0005548915 nova_compute[254819]: 2025-12-06 10:23:54.929 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:23:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:55 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:23:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:23:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:23:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:56.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:23:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:57 np0005548915 nova_compute[254819]: 2025-12-06 10:23:57.000 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:57 np0005548915 nova_compute[254819]: 2025-12-06 10:23:57.001 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:57 np0005548915 nova_compute[254819]: 2025-12-06 10:23:57.494 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:23:57 np0005548915 podman[293734]: 2025-12-06 10:23:57.498257268 +0000 UTC m=+0.111950140 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 05:23:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:57.728Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:23:57 np0005548915 nova_compute[254819]: 2025-12-06 10:23:57.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:23:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:23:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:23:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:23:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:23:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:23:58 np0005548915 nova_compute[254819]: 2025-12-06 10:23:58.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:58 np0005548915 nova_compute[254819]: 2025-12-06 10:23:58.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:23:58 np0005548915 nova_compute[254819]: 2025-12-06 10:23:58.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:23:58 np0005548915 nova_compute[254819]: 2025-12-06 10:23:58.766 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:23:58 np0005548915 nova_compute[254819]: 2025-12-06 10:23:58.767 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:58 np0005548915 nova_compute[254819]: 2025-12-06 10:23:58.767 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:23:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T10:23:59.60479229Z level=info msg="Completed cleanup jobs" duration=41.225113ms
Dec  6 05:23:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T10:23:59.724261084Z level=info msg="Update check succeeded" duration=58.671018ms
Dec  6 05:23:59 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T10:23:59.726778553Z level=info msg="Update check succeeded" duration=99.954433ms
Dec  6 05:23:59 np0005548915 nova_compute[254819]: 2025-12-06 10:23:59.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:59 np0005548915 nova_compute[254819]: 2025-12-06 10:23:59.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:23:59 np0005548915 nova_compute[254819]: 2025-12-06 10:23:59.933 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:00.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:00.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:24:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:24:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:02 np0005548915 podman[293762]: 2025-12-06 10:24:02.451627256 +0000 UTC m=+0.086099816 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  6 05:24:02 np0005548915 nova_compute[254819]: 2025-12-06 10:24:02.497 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:02.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:04.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:04 np0005548915 nova_compute[254819]: 2025-12-06 10:24:04.974 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:07 np0005548915 podman[293792]: 2025-12-06 10:24:07.456090056 +0000 UTC m=+0.085054158 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:24:07 np0005548915 nova_compute[254819]: 2025-12-06 10:24:07.500 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:07.731Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:24:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:24:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:24:09 np0005548915 nova_compute[254819]: 2025-12-06 10:24:09.979 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:24:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:24:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:12 np0005548915 nova_compute[254819]: 2025-12-06 10:24:12.500 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:14 np0005548915 nova_compute[254819]: 2025-12-06 10:24:14.983 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:24:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:16.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:24:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:16.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:17 np0005548915 nova_compute[254819]: 2025-12-06 10:24:17.549 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:17.732Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:24:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:17.733Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:24:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:18.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:19 np0005548915 nova_compute[254819]: 2025-12-06 10:24:19.987 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:20.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:24:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:24:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:22.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:22 np0005548915 nova_compute[254819]: 2025-12-06 10:24:22.552 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:22.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:24:23
Dec  6 05:24:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:24:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:24:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', '.nfs', 'volumes', 'default.rgw.control', 'images']
Dec  6 05:24:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:24:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:24:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:24:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:24:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:24 np0005548915 nova_compute[254819]: 2025-12-06 10:24:24.991 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:27 np0005548915 nova_compute[254819]: 2025-12-06 10:24:27.553 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:27.733Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:24:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:27.733Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:24:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:27.734Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:24:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:28 np0005548915 podman[293859]: 2025-12-06 10:24:28.461228853 +0000 UTC m=+0.078995232 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:24:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:28.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:28.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:29 np0005548915 nova_compute[254819]: 2025-12-06 10:24:29.993 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:30.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:30.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:30] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:24:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:30] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:24:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:32 np0005548915 nova_compute[254819]: 2025-12-06 10:24:32.555 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:32.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:33 np0005548915 podman[293883]: 2025-12-06 10:24:33.52613076 +0000 UTC m=+0.147335244 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 05:24:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:34.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:34 np0005548915 nova_compute[254819]: 2025-12-06 10:24:34.995 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:36.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:37 np0005548915 nova_compute[254819]: 2025-12-06 10:24:37.557 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:37 np0005548915 podman[293938]: 2025-12-06 10:24:37.619818092 +0000 UTC m=+0.063909141 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 05:24:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:37.735Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:24:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:38.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:24:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:24:39 np0005548915 nova_compute[254819]: 2025-12-06 10:24:39.997 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:40.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:24:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:24:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:24:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:42.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:24:42 np0005548915 nova_compute[254819]: 2025-12-06 10:24:42.560 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:42.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.204461) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683204540, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 700, "num_deletes": 251, "total_data_size": 1026848, "memory_usage": 1041304, "flush_reason": "Manual Compaction"}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683216878, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1017152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36331, "largest_seqno": 37030, "table_properties": {"data_size": 1013510, "index_size": 1486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8250, "raw_average_key_size": 19, "raw_value_size": 1006253, "raw_average_value_size": 2362, "num_data_blocks": 65, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016631, "oldest_key_time": 1765016631, "file_creation_time": 1765016683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 12458 microseconds, and 5661 cpu microseconds.
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.216942) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1017152 bytes OK
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.216974) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.218369) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.218393) EVENT_LOG_v1 {"time_micros": 1765016683218385, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.218416) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1023281, prev total WAL file size 1023281, number of live WAL files 2.
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.219429) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(993KB)], [77(13MB)]
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683219567, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15506273, "oldest_snapshot_seqno": -1}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6762 keys, 13325396 bytes, temperature: kUnknown
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683326336, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13325396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13282783, "index_size": 24581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 177577, "raw_average_key_size": 26, "raw_value_size": 13163509, "raw_average_value_size": 1946, "num_data_blocks": 963, "num_entries": 6762, "num_filter_entries": 6762, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.326666) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13325396 bytes
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.327643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.1 rd, 124.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.8 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(28.3) write-amplify(13.1) OK, records in: 7276, records dropped: 514 output_compression: NoCompression
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.327665) EVENT_LOG_v1 {"time_micros": 1765016683327655, "job": 44, "event": "compaction_finished", "compaction_time_micros": 106847, "compaction_time_cpu_micros": 47748, "output_level": 6, "num_output_files": 1, "total_output_size": 13325396, "num_input_records": 7276, "num_output_records": 6762, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683328020, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683331639, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.219244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:24:43 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:24:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:44.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:44.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:45 np0005548915 nova_compute[254819]: 2025-12-06 10:24:45.001 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:46.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:46.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:47 np0005548915 nova_compute[254819]: 2025-12-06 10:24:47.563 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:47.736Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:24:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:47.736Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:24:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:48.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:48.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:50 np0005548915 nova_compute[254819]: 2025-12-06 10:24:50.063 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:50.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:50.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:24:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:24:51 np0005548915 nova_compute[254819]: 2025-12-06 10:24:51.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:24:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:52.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:52 np0005548915 nova_compute[254819]: 2025-12-06 10:24:52.565 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:24:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:52.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:53 np0005548915 nova_compute[254819]: 2025-12-06 10:24:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:24:53 np0005548915 nova_compute[254819]: 2025-12-06 10:24:53.784 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:24:53 np0005548915 nova_compute[254819]: 2025-12-06 10:24:53.784 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:24:53 np0005548915 nova_compute[254819]: 2025-12-06 10:24:53.785 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:24:53 np0005548915 nova_compute[254819]: 2025-12-06 10:24:53.785 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:24:53 np0005548915 nova_compute[254819]: 2025-12-06 10:24:53.785 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:24:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:24:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:24:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:24:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:24:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:24:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:24:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:24:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:24:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:54 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:24:54 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3597878080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.243 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:24:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:24:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:24:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:24:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:24:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:24:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.435 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.437 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4497MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.437 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.438 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 05:24:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:54.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.525 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.525 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  6 05:24:54 np0005548915 nova_compute[254819]: 2025-12-06 10:24:54.697 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 05:24:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:54.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:55 np0005548915 nova_compute[254819]: 2025-12-06 10:24:55.066 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258271433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:24:55 np0005548915 nova_compute[254819]: 2025-12-06 10:24:55.226 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 05:24:55 np0005548915 nova_compute[254819]: 2025-12-06 10:24:55.233 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 05:24:55 np0005548915 nova_compute[254819]: 2025-12-06 10:24:55.249 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 05:24:55 np0005548915 nova_compute[254819]: 2025-12-06 10:24:55.251 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  6 05:24:55 np0005548915 nova_compute[254819]: 2025-12-06 10:24:55.251 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:24:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:24:55 np0005548915 podman[294191]: 2025-12-06 10:24:55.937628568 +0000 UTC m=+0.053603791 container create 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  6 05:24:55 np0005548915 systemd[1]: Started libpod-conmon-8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727.scope.
Dec  6 05:24:56 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:24:56 np0005548915 podman[294191]: 2025-12-06 10:24:55.920201533 +0000 UTC m=+0.036176776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:24:56 np0005548915 podman[294191]: 2025-12-06 10:24:56.021751809 +0000 UTC m=+0.137727072 container init 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 05:24:56 np0005548915 podman[294191]: 2025-12-06 10:24:56.032349038 +0000 UTC m=+0.148324301 container start 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:24:56 np0005548915 podman[294191]: 2025-12-06 10:24:56.036424288 +0000 UTC m=+0.152399541 container attach 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:24:56 np0005548915 bold_austin[294207]: 167 167
Dec  6 05:24:56 np0005548915 systemd[1]: libpod-8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727.scope: Deactivated successfully.
Dec  6 05:24:56 np0005548915 podman[294191]: 2025-12-06 10:24:56.042623507 +0000 UTC m=+0.158598730 container died 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  6 05:24:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7df519bce77be089ab7fca6d7690e505db477ad4562bb7d76608657c34d9785f-merged.mount: Deactivated successfully.
Dec  6 05:24:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 05:24:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:24:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:24:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:24:56 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:24:56 np0005548915 podman[294191]: 2025-12-06 10:24:56.092575778 +0000 UTC m=+0.208551001 container remove 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 05:24:56 np0005548915 systemd[1]: libpod-conmon-8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727.scope: Deactivated successfully.
Dec  6 05:24:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:24:56 np0005548915 podman[294232]: 2025-12-06 10:24:56.324974377 +0000 UTC m=+0.063944522 container create 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  6 05:24:56 np0005548915 systemd[1]: Started libpod-conmon-24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58.scope.
Dec  6 05:24:56 np0005548915 podman[294232]: 2025-12-06 10:24:56.28687398 +0000 UTC m=+0.025844175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:24:56 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:24:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:56 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:56 np0005548915 podman[294232]: 2025-12-06 10:24:56.439091826 +0000 UTC m=+0.178061961 container init 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 05:24:56 np0005548915 podman[294232]: 2025-12-06 10:24:56.451835003 +0000 UTC m=+0.190805108 container start 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Dec  6 05:24:56 np0005548915 podman[294232]: 2025-12-06 10:24:56.455205725 +0000 UTC m=+0.194175930 container attach 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:24:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:56.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:56.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:56 np0005548915 kind_cannon[294249]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:24:56 np0005548915 kind_cannon[294249]: --> All data devices are unavailable
Dec  6 05:24:56 np0005548915 systemd[1]: libpod-24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58.scope: Deactivated successfully.
Dec  6 05:24:56 np0005548915 podman[294232]: 2025-12-06 10:24:56.903601697 +0000 UTC m=+0.642571882 container died 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:24:56 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a-merged.mount: Deactivated successfully.
Dec  6 05:24:56 np0005548915 podman[294232]: 2025-12-06 10:24:56.957084273 +0000 UTC m=+0.696054388 container remove 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:24:56 np0005548915 systemd[1]: libpod-conmon-24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58.scope: Deactivated successfully.
Dec  6 05:24:57 np0005548915 nova_compute[254819]: 2025-12-06 10:24:57.566 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:24:57 np0005548915 podman[294368]: 2025-12-06 10:24:57.657326094 +0000 UTC m=+0.072498575 container create c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 05:24:57 np0005548915 systemd[1]: Started libpod-conmon-c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8.scope.
Dec  6 05:24:57 np0005548915 podman[294368]: 2025-12-06 10:24:57.628662304 +0000 UTC m=+0.043834835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:24:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:57.737Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:24:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:57.738Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:24:57 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:24:57 np0005548915 podman[294368]: 2025-12-06 10:24:57.766090467 +0000 UTC m=+0.181262938 container init c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec  6 05:24:57 np0005548915 podman[294368]: 2025-12-06 10:24:57.779683457 +0000 UTC m=+0.194855898 container start c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Dec  6 05:24:57 np0005548915 blissful_robinson[294407]: 167 167
Dec  6 05:24:57 np0005548915 systemd[1]: libpod-c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8.scope: Deactivated successfully.
Dec  6 05:24:57 np0005548915 podman[294368]: 2025-12-06 10:24:57.934185125 +0000 UTC m=+0.349357616 container attach c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:24:57 np0005548915 podman[294368]: 2025-12-06 10:24:57.935370977 +0000 UTC m=+0.350543468 container died c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:24:57 np0005548915 systemd[1]: var-lib-containers-storage-overlay-4e05bbeb31b67b3050e67cbbd47e5fcc88faf365476d6d76aabc0959b2479c63-merged.mount: Deactivated successfully.
Dec  6 05:24:57 np0005548915 podman[294368]: 2025-12-06 10:24:57.98835237 +0000 UTC m=+0.403524851 container remove c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:24:57 np0005548915 systemd[1]: libpod-conmon-c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8.scope: Deactivated successfully.
Dec  6 05:24:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:24:58 np0005548915 podman[294432]: 2025-12-06 10:24:58.18036411 +0000 UTC m=+0.057824966 container create 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 05:24:58 np0005548915 systemd[1]: Started libpod-conmon-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope.
Dec  6 05:24:58 np0005548915 podman[294432]: 2025-12-06 10:24:58.153247181 +0000 UTC m=+0.030708117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:24:58 np0005548915 nova_compute[254819]: 2025-12-06 10:24:58.252 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:24:58 np0005548915 nova_compute[254819]: 2025-12-06 10:24:58.253 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:24:58 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:24:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:58 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:58 np0005548915 podman[294432]: 2025-12-06 10:24:58.307139382 +0000 UTC m=+0.184600308 container init 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:24:58 np0005548915 podman[294432]: 2025-12-06 10:24:58.314958275 +0000 UTC m=+0.192419131 container start 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  6 05:24:58 np0005548915 podman[294432]: 2025-12-06 10:24:58.319039096 +0000 UTC m=+0.196499992 container attach 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  6 05:24:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:24:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]: {
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:    "1": [
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:        {
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "devices": [
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "/dev/loop3"
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            ],
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "lv_name": "ceph_lv0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "lv_size": "21470642176",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "name": "ceph_lv0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "tags": {
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.cluster_name": "ceph",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.crush_device_class": "",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.encrypted": "0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.osd_id": "1",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.type": "block",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.vdo": "0",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:                "ceph.with_tpm": "0"
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            },
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "type": "block",
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:            "vg_name": "ceph_vg0"
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:        }
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]:    ]
Dec  6 05:24:58 np0005548915 blissful_agnesi[294448]: }
Dec  6 05:24:58 np0005548915 systemd[1]: libpod-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope: Deactivated successfully.
Dec  6 05:24:58 np0005548915 conmon[294448]: conmon 5b5469f23b54a89be8dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope/container/memory.events
Dec  6 05:24:58 np0005548915 podman[294432]: 2025-12-06 10:24:58.684026696 +0000 UTC m=+0.561487592 container died 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 05:24:58 np0005548915 systemd[1]: var-lib-containers-storage-overlay-836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23-merged.mount: Deactivated successfully.
Dec  6 05:24:58 np0005548915 podman[294432]: 2025-12-06 10:24:58.73334047 +0000 UTC m=+0.610801326 container remove 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  6 05:24:58 np0005548915 systemd[1]: libpod-conmon-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope: Deactivated successfully.
Dec  6 05:24:58 np0005548915 nova_compute[254819]: 2025-12-06 10:24:58.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:24:58 np0005548915 podman[294459]: 2025-12-06 10:24:58.8064114 +0000 UTC m=+0.076242038 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 05:24:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:24:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:24:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:58.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:24:59 np0005548915 podman[294582]: 2025-12-06 10:24:59.381990826 +0000 UTC m=+0.050152156 container create d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:24:59 np0005548915 systemd[1]: Started libpod-conmon-d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1.scope.
Dec  6 05:24:59 np0005548915 podman[294582]: 2025-12-06 10:24:59.361702843 +0000 UTC m=+0.029864213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:24:59 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:24:59 np0005548915 podman[294582]: 2025-12-06 10:24:59.477080715 +0000 UTC m=+0.145242075 container init d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:24:59 np0005548915 podman[294582]: 2025-12-06 10:24:59.485872516 +0000 UTC m=+0.154033846 container start d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  6 05:24:59 np0005548915 podman[294582]: 2025-12-06 10:24:59.489128924 +0000 UTC m=+0.157290444 container attach d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:24:59 np0005548915 busy_jennings[294598]: 167 167
Dec  6 05:24:59 np0005548915 systemd[1]: libpod-d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1.scope: Deactivated successfully.
Dec  6 05:24:59 np0005548915 podman[294582]: 2025-12-06 10:24:59.493226656 +0000 UTC m=+0.161387986 container died d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:24:59 np0005548915 systemd[1]: var-lib-containers-storage-overlay-b0e9b617b46a06c85415f6d931d952aba099e235072375fa63f06ad1c2ca3c18-merged.mount: Deactivated successfully.
Dec  6 05:24:59 np0005548915 podman[294582]: 2025-12-06 10:24:59.5275412 +0000 UTC m=+0.195702530 container remove d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  6 05:24:59 np0005548915 systemd[1]: libpod-conmon-d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1.scope: Deactivated successfully.
Dec  6 05:24:59 np0005548915 nova_compute[254819]: 2025-12-06 10:24:59.743 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:24:59 np0005548915 podman[294622]: 2025-12-06 10:24:59.708677944 +0000 UTC m=+0.040256007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:24:59 np0005548915 podman[294622]: 2025-12-06 10:24:59.899804479 +0000 UTC m=+0.231382522 container create 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:24:59 np0005548915 systemd[1]: Started libpod-conmon-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope.
Dec  6 05:24:59 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:24:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:59 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:24:59 np0005548915 podman[294622]: 2025-12-06 10:24:59.994078547 +0000 UTC m=+0.325656610 container init 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Dec  6 05:25:00 np0005548915 podman[294622]: 2025-12-06 10:25:00.000672676 +0000 UTC m=+0.332250729 container start 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:25:00 np0005548915 podman[294622]: 2025-12-06 10:25:00.005400646 +0000 UTC m=+0.336978709 container attach 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.068 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:25:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:00.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:25:00 np0005548915 lvm[294716]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:25:00 np0005548915 lvm[294716]: VG ceph_vg0 finished
Dec  6 05:25:00 np0005548915 wizardly_neumann[294640]: {}
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:25:00 np0005548915 systemd[1]: libpod-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope: Deactivated successfully.
Dec  6 05:25:00 np0005548915 podman[294622]: 2025-12-06 10:25:00.761601471 +0000 UTC m=+1.093179534 container died 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:25:00 np0005548915 systemd[1]: libpod-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope: Consumed 1.261s CPU time.
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.770 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.770 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:25:00 np0005548915 nova_compute[254819]: 2025-12-06 10:25:00.771 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  6 05:25:00 np0005548915 systemd[1]: var-lib-containers-storage-overlay-dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b-merged.mount: Deactivated successfully.
Dec  6 05:25:00 np0005548915 podman[294622]: 2025-12-06 10:25:00.809410203 +0000 UTC m=+1.140988246 container remove 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:25:00 np0005548915 systemd[1]: libpod-conmon-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope: Deactivated successfully.
Dec  6 05:25:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:25:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:25:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:25:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:25:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:25:00 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:25:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:25:01 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:25:01 np0005548915 nova_compute[254819]: 2025-12-06 10:25:01.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 05:25:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:02.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:02 np0005548915 nova_compute[254819]: 2025-12-06 10:25:02.568 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:02.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:04 np0005548915 podman[294762]: 2025-12-06 10:25:04.495544488 +0000 UTC m=+0.107781617 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  6 05:25:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:04.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:05 np0005548915 nova_compute[254819]: 2025-12-06 10:25:05.072 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:06.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:07 np0005548915 nova_compute[254819]: 2025-12-06 10:25:07.569 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:07.738Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:25:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:08 np0005548915 podman[294793]: 2025-12-06 10:25:08.445268492 +0000 UTC m=+0.067938391 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 05:25:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:08.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:25:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:25:10 np0005548915 nova_compute[254819]: 2025-12-06 10:25:10.075 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:10.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:10.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:25:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:25:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:12.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:12 np0005548915 nova_compute[254819]: 2025-12-06 10:25:12.572 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:12.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:14.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:15 np0005548915 nova_compute[254819]: 2025-12-06 10:25:15.105 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:17 np0005548915 nova_compute[254819]: 2025-12-06 10:25:17.573 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:17.740Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:25:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:18.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:20 np0005548915 nova_compute[254819]: 2025-12-06 10:25:20.153 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:20.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:25:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:25:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:22.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:22 np0005548915 nova_compute[254819]: 2025-12-06 10:25:22.574 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 05:25:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:22.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:25:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:25:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:25:23
Dec  6 05:25:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:25:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:25:23 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['vms', '.nfs', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', '.rgw.root']
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 0 op/s
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:25:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:25:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:25:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:25 np0005548915 nova_compute[254819]: 2025-12-06 10:25:25.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:26.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:27 np0005548915 nova_compute[254819]: 2025-12-06 10:25:27.576 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:27.742Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:25:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 0 op/s
Dec  6 05:25:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:28.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:29 np0005548915 podman[294858]: 2025-12-06 10:25:29.4563517 +0000 UTC m=+0.079980510 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec  6 05:25:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:30 np0005548915 nova_compute[254819]: 2025-12-06 10:25:30.163 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:30.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:25:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  6 05:25:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:32.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:32 np0005548915 nova_compute[254819]: 2025-12-06 10:25:32.577 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:34.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:35 np0005548915 nova_compute[254819]: 2025-12-06 10:25:35.166 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:35 np0005548915 podman[294884]: 2025-12-06 10:25:35.496785892 +0000 UTC m=+0.123692649 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 05:25:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:36.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:37 np0005548915 nova_compute[254819]: 2025-12-06 10:25:37.578 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:37.744Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:25:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:38.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:25:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:25:39 np0005548915 podman[294940]: 2025-12-06 10:25:39.430080778 +0000 UTC m=+0.064058415 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 05:25:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:40 np0005548915 nova_compute[254819]: 2025-12-06 10:25:40.171 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:40.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:40.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:25:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:25:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:42 np0005548915 nova_compute[254819]: 2025-12-06 10:25:42.583 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:45 np0005548915 nova_compute[254819]: 2025-12-06 10:25:45.175 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  6 05:25:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2814903871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 05:25:46 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  6 05:25:46 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2814903871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 05:25:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:47 np0005548915 nova_compute[254819]: 2025-12-06 10:25:47.582 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:47.744Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:25:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:48.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:48.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:50 np0005548915 nova_compute[254819]: 2025-12-06 10:25:50.178 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:50.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:25:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:25:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:50.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:52.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:52 np0005548915 nova_compute[254819]: 2025-12-06 10:25:52.583 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:52 np0005548915 nova_compute[254819]: 2025-12-06 10:25:52.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:25:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:53 np0005548915 nova_compute[254819]: 2025-12-06 10:25:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:25:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:25:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:25:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:25:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:25:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:25:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:25:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:25:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:25:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:25:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:25:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:25:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:25:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:25:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:25:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:54.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:54.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.120 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.120 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.121 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.121 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.122 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.182 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:25:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470316468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.602 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.760 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.761 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4505MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.761 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.761 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.904 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.904 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:25:55 np0005548915 nova_compute[254819]: 2025-12-06 10:25:55.919 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.068 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.068 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.099 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.122 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.137 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:25:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:25:56 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:25:56 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261979015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:25:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:56.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.578 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.583 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.599 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.600 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:25:56 np0005548915 nova_compute[254819]: 2025-12-06 10:25:56.600 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:25:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:25:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:56.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:25:57 np0005548915 nova_compute[254819]: 2025-12-06 10:25:57.585 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:25:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:57.745Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:25:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:25:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:58.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:25:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:25:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:58.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:25:59 np0005548915 nova_compute[254819]: 2025-12-06 10:25:59.601 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:25:59 np0005548915 nova_compute[254819]: 2025-12-06 10:25:59.602 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:25:59 np0005548915 nova_compute[254819]: 2025-12-06 10:25:59.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:00 np0005548915 nova_compute[254819]: 2025-12-06 10:26:00.185 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:00 np0005548915 podman[295053]: 2025-12-06 10:26:00.447866426 +0000 UTC m=+0.076195687 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 05:26:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:00.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:00 np0005548915 nova_compute[254819]: 2025-12-06 10:26:00.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:00 np0005548915 nova_compute[254819]: 2025-12-06 10:26:00.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:26:00 np0005548915 nova_compute[254819]: 2025-12-06 10:26:00.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:26:00 np0005548915 nova_compute[254819]: 2025-12-06 10:26:00.768 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:26:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:26:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:26:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:00.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:01 np0005548915 nova_compute[254819]: 2025-12-06 10:26:01.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:01 np0005548915 nova_compute[254819]: 2025-12-06 10:26:01.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:01 np0005548915 nova_compute[254819]: 2025-12-06 10:26:01.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:02.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:02 np0005548915 nova_compute[254819]: 2025-12-06 10:26:02.587 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:02 np0005548915 nova_compute[254819]: 2025-12-06 10:26:02.761 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:02 np0005548915 nova_compute[254819]: 2025-12-06 10:26:02.761 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:26:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:02.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  6 05:26:03 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 05:26:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:26:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:26:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:04.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:04 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  6 05:26:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:04.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:05 np0005548915 podman[295251]: 2025-12-06 10:26:05.062631843 +0000 UTC m=+0.055447801 container create f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:26:05 np0005548915 systemd[1]: Started libpod-conmon-f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6.scope.
Dec  6 05:26:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:05 np0005548915 podman[295251]: 2025-12-06 10:26:05.035038151 +0000 UTC m=+0.027854119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:26:05 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:26:05 np0005548915 podman[295251]: 2025-12-06 10:26:05.168192668 +0000 UTC m=+0.161008606 container init f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:26:05 np0005548915 podman[295251]: 2025-12-06 10:26:05.179117876 +0000 UTC m=+0.171933814 container start f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:26:05 np0005548915 podman[295251]: 2025-12-06 10:26:05.183507915 +0000 UTC m=+0.176323853 container attach f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  6 05:26:05 np0005548915 confident_mendel[295267]: 167 167
Dec  6 05:26:05 np0005548915 nova_compute[254819]: 2025-12-06 10:26:05.188 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:05 np0005548915 systemd[1]: libpod-f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6.scope: Deactivated successfully.
Dec  6 05:26:05 np0005548915 podman[295251]: 2025-12-06 10:26:05.190385623 +0000 UTC m=+0.183201551 container died f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:26:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 05:26:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:26:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:05 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:26:05 np0005548915 ceph-mon[74327]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  6 05:26:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay-84c2c8a7d002f95aad81858764b60ed7acd0af1f4cba04b2bad552965cc18309-merged.mount: Deactivated successfully.
Dec  6 05:26:05 np0005548915 podman[295251]: 2025-12-06 10:26:05.252286749 +0000 UTC m=+0.245102707 container remove f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 05:26:05 np0005548915 systemd[1]: libpod-conmon-f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6.scope: Deactivated successfully.
Dec  6 05:26:05 np0005548915 podman[295293]: 2025-12-06 10:26:05.414090366 +0000 UTC m=+0.047530206 container create 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:26:05 np0005548915 systemd[1]: Started libpod-conmon-3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5.scope.
Dec  6 05:26:05 np0005548915 podman[295293]: 2025-12-06 10:26:05.394847491 +0000 UTC m=+0.028287351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:26:05 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:26:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:05 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:05 np0005548915 podman[295293]: 2025-12-06 10:26:05.534741882 +0000 UTC m=+0.168181722 container init 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:26:05 np0005548915 podman[295293]: 2025-12-06 10:26:05.547374405 +0000 UTC m=+0.180814245 container start 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:26:05 np0005548915 podman[295293]: 2025-12-06 10:26:05.551065616 +0000 UTC m=+0.184505456 container attach 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  6 05:26:05 np0005548915 nova_compute[254819]: 2025-12-06 10:26:05.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:05 np0005548915 nova_compute[254819]: 2025-12-06 10:26:05.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 05:26:05 np0005548915 nova_compute[254819]: 2025-12-06 10:26:05.772 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 05:26:05 np0005548915 nova_compute[254819]: 2025-12-06 10:26:05.772 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:05 np0005548915 nova_compute[254819]: 2025-12-06 10:26:05.772 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 05:26:05 np0005548915 compassionate_lichterman[295310]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:26:05 np0005548915 compassionate_lichterman[295310]: --> All data devices are unavailable
Dec  6 05:26:05 np0005548915 systemd[1]: libpod-3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5.scope: Deactivated successfully.
Dec  6 05:26:05 np0005548915 podman[295327]: 2025-12-06 10:26:05.939866885 +0000 UTC m=+0.038337255 container died 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  6 05:26:05 np0005548915 systemd[1]: var-lib-containers-storage-overlay-41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4-merged.mount: Deactivated successfully.
Dec  6 05:26:05 np0005548915 podman[295327]: 2025-12-06 10:26:05.984677405 +0000 UTC m=+0.083147735 container remove 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:26:05 np0005548915 systemd[1]: libpod-conmon-3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5.scope: Deactivated successfully.
Dec  6 05:26:06 np0005548915 podman[295326]: 2025-12-06 10:26:06.046416437 +0000 UTC m=+0.139664584 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 05:26:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec  6 05:26:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:06.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:06 np0005548915 podman[295458]: 2025-12-06 10:26:06.675739277 +0000 UTC m=+0.043066284 container create c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:26:06 np0005548915 systemd[1]: Started libpod-conmon-c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789.scope.
Dec  6 05:26:06 np0005548915 podman[295458]: 2025-12-06 10:26:06.656503793 +0000 UTC m=+0.023830850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:26:06 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:26:06 np0005548915 podman[295458]: 2025-12-06 10:26:06.773111219 +0000 UTC m=+0.140438256 container init c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  6 05:26:06 np0005548915 podman[295458]: 2025-12-06 10:26:06.787302026 +0000 UTC m=+0.154629043 container start c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:26:06 np0005548915 podman[295458]: 2025-12-06 10:26:06.791759636 +0000 UTC m=+0.159086663 container attach c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  6 05:26:06 np0005548915 xenodochial_margulis[295475]: 167 167
Dec  6 05:26:06 np0005548915 systemd[1]: libpod-c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789.scope: Deactivated successfully.
Dec  6 05:26:06 np0005548915 podman[295458]: 2025-12-06 10:26:06.796757593 +0000 UTC m=+0.164084610 container died c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:26:06 np0005548915 systemd[1]: var-lib-containers-storage-overlay-99927a5d9aa45770db80b61c6268ace7b13e3a4c24c94894c48f99e6ad3a785c-merged.mount: Deactivated successfully.
Dec  6 05:26:06 np0005548915 podman[295458]: 2025-12-06 10:26:06.838620833 +0000 UTC m=+0.205947850 container remove c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  6 05:26:06 np0005548915 systemd[1]: libpod-conmon-c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789.scope: Deactivated successfully.
Dec  6 05:26:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:06.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:07 np0005548915 podman[295498]: 2025-12-06 10:26:07.058945344 +0000 UTC m=+0.068912618 container create 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 05:26:07 np0005548915 systemd[1]: Started libpod-conmon-561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65.scope.
Dec  6 05:26:07 np0005548915 podman[295498]: 2025-12-06 10:26:07.026851659 +0000 UTC m=+0.036819003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:26:07 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:26:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:07 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:07 np0005548915 podman[295498]: 2025-12-06 10:26:07.169453683 +0000 UTC m=+0.179420967 container init 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  6 05:26:07 np0005548915 podman[295498]: 2025-12-06 10:26:07.183118336 +0000 UTC m=+0.193085610 container start 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:26:07 np0005548915 podman[295498]: 2025-12-06 10:26:07.187891286 +0000 UTC m=+0.197858610 container attach 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:26:07 np0005548915 bold_wu[295515]: {
Dec  6 05:26:07 np0005548915 bold_wu[295515]:    "1": [
Dec  6 05:26:07 np0005548915 bold_wu[295515]:        {
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "devices": [
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "/dev/loop3"
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            ],
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "lv_name": "ceph_lv0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "lv_size": "21470642176",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "name": "ceph_lv0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "tags": {
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.cluster_name": "ceph",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.crush_device_class": "",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.encrypted": "0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.osd_id": "1",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.type": "block",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.vdo": "0",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:                "ceph.with_tpm": "0"
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            },
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "type": "block",
Dec  6 05:26:07 np0005548915 bold_wu[295515]:            "vg_name": "ceph_vg0"
Dec  6 05:26:07 np0005548915 bold_wu[295515]:        }
Dec  6 05:26:07 np0005548915 bold_wu[295515]:    ]
Dec  6 05:26:07 np0005548915 bold_wu[295515]: }
Dec  6 05:26:07 np0005548915 systemd[1]: libpod-561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65.scope: Deactivated successfully.
Dec  6 05:26:07 np0005548915 podman[295498]: 2025-12-06 10:26:07.497533199 +0000 UTC m=+0.507500463 container died 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  6 05:26:07 np0005548915 systemd[1]: var-lib-containers-storage-overlay-40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c-merged.mount: Deactivated successfully.
Dec  6 05:26:07 np0005548915 podman[295498]: 2025-12-06 10:26:07.542583257 +0000 UTC m=+0.552550491 container remove 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Dec  6 05:26:07 np0005548915 systemd[1]: libpod-conmon-561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65.scope: Deactivated successfully.
Dec  6 05:26:07 np0005548915 nova_compute[254819]: 2025-12-06 10:26:07.589 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:07.746Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:26:08 np0005548915 podman[295629]: 2025-12-06 10:26:08.204452523 +0000 UTC m=+0.045828020 container create 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 05:26:08 np0005548915 systemd[1]: Started libpod-conmon-2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a.scope.
Dec  6 05:26:08 np0005548915 podman[295629]: 2025-12-06 10:26:08.182647509 +0000 UTC m=+0.024023026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:26:08 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:26:08 np0005548915 podman[295629]: 2025-12-06 10:26:08.294040083 +0000 UTC m=+0.135415600 container init 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:26:08 np0005548915 podman[295629]: 2025-12-06 10:26:08.299634265 +0000 UTC m=+0.141009762 container start 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 05:26:08 np0005548915 podman[295629]: 2025-12-06 10:26:08.302845072 +0000 UTC m=+0.144220589 container attach 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  6 05:26:08 np0005548915 inspiring_haibt[295645]: 167 167
Dec  6 05:26:08 np0005548915 systemd[1]: libpod-2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a.scope: Deactivated successfully.
Dec  6 05:26:08 np0005548915 podman[295629]: 2025-12-06 10:26:08.30531371 +0000 UTC m=+0.146689217 container died 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:26:08 np0005548915 systemd[1]: var-lib-containers-storage-overlay-e18611d23e644d3f0126a08d9f17ce709d74f48246331ebf00c2a7f3841f1832-merged.mount: Deactivated successfully.
Dec  6 05:26:08 np0005548915 podman[295629]: 2025-12-06 10:26:08.344440685 +0000 UTC m=+0.185816192 container remove 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 05:26:08 np0005548915 systemd[1]: libpod-conmon-2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a.scope: Deactivated successfully.
Dec  6 05:26:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec  6 05:26:08 np0005548915 podman[295670]: 2025-12-06 10:26:08.537100183 +0000 UTC m=+0.049774437 container create da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Dec  6 05:26:08 np0005548915 systemd[1]: Started libpod-conmon-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope.
Dec  6 05:26:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:08.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:08 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:26:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:08 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:26:08 np0005548915 podman[295670]: 2025-12-06 10:26:08.517787657 +0000 UTC m=+0.030461961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:26:08 np0005548915 podman[295670]: 2025-12-06 10:26:08.621071599 +0000 UTC m=+0.133745873 container init da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec  6 05:26:08 np0005548915 podman[295670]: 2025-12-06 10:26:08.628688816 +0000 UTC m=+0.141363070 container start da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  6 05:26:08 np0005548915 podman[295670]: 2025-12-06 10:26:08.631619176 +0000 UTC m=+0.144293460 container attach da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 05:26:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:26:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:08.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:26:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  6 05:26:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:26:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:26:09 np0005548915 lvm[295760]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:26:09 np0005548915 lvm[295760]: VG ceph_vg0 finished
Dec  6 05:26:09 np0005548915 romantic_rubin[295685]: {}
Dec  6 05:26:09 np0005548915 systemd[1]: libpod-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope: Deactivated successfully.
Dec  6 05:26:09 np0005548915 podman[295670]: 2025-12-06 10:26:09.430021151 +0000 UTC m=+0.942695405 container died da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:26:09 np0005548915 systemd[1]: libpod-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope: Consumed 1.252s CPU time.
Dec  6 05:26:09 np0005548915 systemd[1]: var-lib-containers-storage-overlay-11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816-merged.mount: Deactivated successfully.
Dec  6 05:26:09 np0005548915 podman[295670]: 2025-12-06 10:26:09.484808874 +0000 UTC m=+0.997483138 container remove da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  6 05:26:09 np0005548915 systemd[1]: libpod-conmon-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope: Deactivated successfully.
Dec  6 05:26:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:26:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:09 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:26:09 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:09 np0005548915 podman[295775]: 2025-12-06 10:26:09.589022732 +0000 UTC m=+0.085149480 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  6 05:26:10 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:10 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:10 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:26:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:10 np0005548915 nova_compute[254819]: 2025-12-06 10:26:10.191 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec  6 05:26:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:26:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:26:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec  6 05:26:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:12.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:12 np0005548915 nova_compute[254819]: 2025-12-06 10:26:12.593 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec  6 05:26:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:14.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:15 np0005548915 nova_compute[254819]: 2025-12-06 10:26:15.195 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:16.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:17 np0005548915 nova_compute[254819]: 2025-12-06 10:26:17.596 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:17.747Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:26:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:18.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.127368) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780127438, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1167, "num_deletes": 255, "total_data_size": 1982910, "memory_usage": 2005696, "flush_reason": "Manual Compaction"}
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780142881, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1942990, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37031, "largest_seqno": 38197, "table_properties": {"data_size": 1937422, "index_size": 2900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12264, "raw_average_key_size": 19, "raw_value_size": 1926054, "raw_average_value_size": 3116, "num_data_blocks": 125, "num_entries": 618, "num_filter_entries": 618, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016684, "oldest_key_time": 1765016684, "file_creation_time": 1765016780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 15561 microseconds, and 6165 cpu microseconds.
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.142934) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1942990 bytes OK
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.142956) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.144838) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.144852) EVENT_LOG_v1 {"time_micros": 1765016780144847, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.144868) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1977606, prev total WAL file size 1977606, number of live WAL files 2.
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.145520) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1897KB)], [80(12MB)]
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780145548, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15268386, "oldest_snapshot_seqno": -1}
Dec  6 05:26:20 np0005548915 nova_compute[254819]: 2025-12-06 10:26:20.199 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6851 keys, 15099811 bytes, temperature: kUnknown
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780246004, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15099811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15054753, "index_size": 26834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 180378, "raw_average_key_size": 26, "raw_value_size": 14932111, "raw_average_value_size": 2179, "num_data_blocks": 1056, "num_entries": 6851, "num_filter_entries": 6851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.246303) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15099811 bytes
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.247596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.8 rd, 150.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.7 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(15.6) write-amplify(7.8) OK, records in: 7380, records dropped: 529 output_compression: NoCompression
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.247617) EVENT_LOG_v1 {"time_micros": 1765016780247608, "job": 46, "event": "compaction_finished", "compaction_time_micros": 100572, "compaction_time_cpu_micros": 27885, "output_level": 6, "num_output_files": 1, "total_output_size": 15099811, "num_input_records": 7380, "num_output_records": 6851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780248299, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780251602, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.145430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:26:20 np0005548915 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 05:26:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:20.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:26:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  6 05:26:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:20.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:22 np0005548915 nova_compute[254819]: 2025-12-06 10:26:22.596 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:22.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:22.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:26:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:26:24
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'vms', 'volumes', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups']
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:26:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:26:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:26:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:24.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:25 np0005548915 nova_compute[254819]: 2025-12-06 10:26:25.203 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:27 np0005548915 nova_compute[254819]: 2025-12-06 10:26:27.597 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:27.748Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:26:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:28.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:30 np0005548915 nova_compute[254819]: 2025-12-06 10:26:30.207 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:30.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:26:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:26:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:30.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:31 np0005548915 podman[295867]: 2025-12-06 10:26:31.46264141 +0000 UTC m=+0.091311138 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 05:26:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:32 np0005548915 nova_compute[254819]: 2025-12-06 10:26:32.599 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:32.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:34.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:35 np0005548915 nova_compute[254819]: 2025-12-06 10:26:35.209 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:36 np0005548915 podman[295893]: 2025-12-06 10:26:36.451777591 +0000 UTC m=+0.085090888 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 05:26:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:36.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:37 np0005548915 nova_compute[254819]: 2025-12-06 10:26:37.602 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:37.748Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:26:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:37.749Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:26:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:38.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:26:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:26:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:40 np0005548915 nova_compute[254819]: 2025-12-06 10:26:40.235 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:40 np0005548915 podman[295949]: 2025-12-06 10:26:40.412331379 +0000 UTC m=+0.044628507 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  6 05:26:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:40.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:40] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:26:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:40] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:26:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:42 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:42 np0005548915 nova_compute[254819]: 2025-12-06 10:26:42.601 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:26:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:42.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:26:42 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:42 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:42 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:44 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:44.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:44 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:44 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:44 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:45 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:45 np0005548915 nova_compute[254819]: 2025-12-06 10:26:45.236 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:46 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:46.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:46 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:46 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:46 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:47 np0005548915 nova_compute[254819]: 2025-12-06 10:26:47.603 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:47 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:47.750Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:26:48 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:48.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:48 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:48 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:48 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:48.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:50 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:50 np0005548915 nova_compute[254819]: 2025-12-06 10:26:50.239 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:50 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:50.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:50 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:50] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:26:50 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:50] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  6 05:26:50 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:50 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:50 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:50.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:52 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:52 np0005548915 nova_compute[254819]: 2025-12-06 10:26:52.604 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:52.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:52 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:52 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:52 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:52.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:53 np0005548915 nova_compute[254819]: 2025-12-06 10:26:53.788 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:53 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:26:53 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:26:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:26:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:26:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:26:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:26:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:26:54 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:26:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:26:54.253 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:26:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:26:54.254 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:26:54 np0005548915 ovn_metadata_agent[162262]: 2025-12-06 10:26:54.254 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:26:54 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  6 05:26:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:54.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  6 05:26:54 np0005548915 nova_compute[254819]: 2025-12-06 10:26:54.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:54 np0005548915 nova_compute[254819]: 2025-12-06 10:26:54.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:26:54 np0005548915 nova_compute[254819]: 2025-12-06 10:26:54.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:26:54 np0005548915 nova_compute[254819]: 2025-12-06 10:26:54.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:26:54 np0005548915 nova_compute[254819]: 2025-12-06 10:26:54.773 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 05:26:54 np0005548915 nova_compute[254819]: 2025-12-06 10:26:54.773 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:26:54 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:54 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:26:54 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:54.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:26:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:26:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:26:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570750853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.195 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.264 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.421 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.422 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.423 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.423 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.487 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.487 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.508 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 05:26:55 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  6 05:26:55 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927824637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.978 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.983 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 05:26:55 np0005548915 nova_compute[254819]: 2025-12-06 10:26:55.998 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 05:26:56 np0005548915 nova_compute[254819]: 2025-12-06 10:26:56.000 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 05:26:56 np0005548915 nova_compute[254819]: 2025-12-06 10:26:56.000 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 05:26:56 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:26:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:56.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:56 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:56 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:26:56 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:56.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:26:57 np0005548915 nova_compute[254819]: 2025-12-06 10:26:57.608 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:26:57 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:57.752Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:26:58 np0005548915 nova_compute[254819]: 2025-12-06 10:26:58.000 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:58 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:26:58 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:58 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:58 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:58.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:58 np0005548915 nova_compute[254819]: 2025-12-06 10:26:58.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:26:58 np0005548915 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:50986] [POST] [200] [0.002s] [4.0B] [f7bdf29d-61cb-4152-afc5-f0cad61d43d8] /api/prometheus_receiver
Dec  6 05:26:59 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:26:59 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:26:59 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:58.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:26:59 np0005548915 nova_compute[254819]: 2025-12-06 10:26:59.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:27:00 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:00 np0005548915 nova_compute[254819]: 2025-12-06 10:27:00.268 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:00 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:00 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:00 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:00 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:00.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:00 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:27:00 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:27:01 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:01 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:01 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:01.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:01 np0005548915 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:27:01 np0005548915 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:27:01 np0005548915 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 05:27:01 np0005548915 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 05:27:01 np0005548915 nova_compute[254819]: 2025-12-06 10:27:01.768 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 05:27:01 np0005548915 nova_compute[254819]: 2025-12-06 10:27:01.768 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:27:02 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:02 np0005548915 podman[296060]: 2025-12-06 10:27:02.473173707 +0000 UTC m=+0.095382929 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 05:27:02 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:02 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:02 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:02.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:02 np0005548915 nova_compute[254819]: 2025-12-06 10:27:02.646 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:02 np0005548915 nova_compute[254819]: 2025-12-06 10:27:02.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:27:02 np0005548915 nova_compute[254819]: 2025-12-06 10:27:02.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 05:27:03 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:03 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:03 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:03.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:03 np0005548915 nova_compute[254819]: 2025-12-06 10:27:03.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 05:27:04 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:27:04 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:04 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:04 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:04.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:05 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:05 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:05 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:05.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:05 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:05 np0005548915 nova_compute[254819]: 2025-12-06 10:27:05.271 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:06 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:06 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:06 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:06 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:06.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:07 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:07 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:07 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:07.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:07 np0005548915 podman[296085]: 2025-12-06 10:27:07.490398624 +0000 UTC m=+0.119871116 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 05:27:07 np0005548915 nova_compute[254819]: 2025-12-06 10:27:07.647 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:07.753Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:27:07 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:07.753Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:27:08 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:27:08 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:08 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:08 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:08.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:08 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:08.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:27:08 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:27:08 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:27:09 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:09 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:09 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:09.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:10 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:10 np0005548915 nova_compute[254819]: 2025-12-06 10:27:10.274 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:10 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:10 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:10 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:10 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:10.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:10 np0005548915 podman[296236]: 2025-12-06 10:27:10.818725763 +0000 UTC m=+0.370272625 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  6 05:27:10 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:27:10 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:27:10 np0005548915 podman[296236]: 2025-12-06 10:27:10.942910235 +0000 UTC m=+0.494457077 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec  6 05:27:11 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:11 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:11 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:11.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:11 np0005548915 podman[296271]: 2025-12-06 10:27:11.21736282 +0000 UTC m=+0.061301251 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 05:27:11 np0005548915 podman[296373]: 2025-12-06 10:27:11.611511075 +0000 UTC m=+0.070076439 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:27:11 np0005548915 podman[296373]: 2025-12-06 10:27:11.649047957 +0000 UTC m=+0.107613281 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:27:12 np0005548915 podman[296508]: 2025-12-06 10:27:12.280848305 +0000 UTC m=+0.071318424 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 05:27:12 np0005548915 podman[296508]: 2025-12-06 10:27:12.324872164 +0000 UTC m=+0.115342273 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec  6 05:27:12 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:12 np0005548915 podman[296575]: 2025-12-06 10:27:12.609612829 +0000 UTC m=+0.078964772 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, release=1793, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git)
Dec  6 05:27:12 np0005548915 podman[296575]: 2025-12-06 10:27:12.646948496 +0000 UTC m=+0.116300419 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Dec  6 05:27:12 np0005548915 nova_compute[254819]: 2025-12-06 10:27:12.651 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:12 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:12 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:12 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:12.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:12 np0005548915 podman[296639]: 2025-12-06 10:27:12.93055874 +0000 UTC m=+0.068850366 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:27:12 np0005548915 podman[296639]: 2025-12-06 10:27:12.963913999 +0000 UTC m=+0.102205585 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:27:13 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:13 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:27:13 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:13.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:27:13 np0005548915 podman[296715]: 2025-12-06 10:27:13.213152337 +0000 UTC m=+0.071453048 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 05:27:13 np0005548915 podman[296715]: 2025-12-06 10:27:13.412732182 +0000 UTC m=+0.271032823 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  6 05:27:13 np0005548915 podman[296827]: 2025-12-06 10:27:13.826011598 +0000 UTC m=+0.055178153 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:27:13 np0005548915 podman[296827]: 2025-12-06 10:27:13.870614712 +0000 UTC m=+0.099781267 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  6 05:27:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:27:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:13 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:27:13 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:27:14 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:14 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:14 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:14.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:27:14 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 05:27:14 np0005548915 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  6 05:27:15 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:15 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:15 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:15 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:15 np0005548915 nova_compute[254819]: 2025-12-06 10:27:15.276 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:15 np0005548915 podman[297041]: 2025-12-06 10:27:15.494779518 +0000 UTC m=+0.053542799 container create 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:27:15 np0005548915 systemd[1]: Started libpod-conmon-8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633.scope.
Dec  6 05:27:15 np0005548915 podman[297041]: 2025-12-06 10:27:15.468005499 +0000 UTC m=+0.026768830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:27:15 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:27:15 np0005548915 podman[297041]: 2025-12-06 10:27:15.593843976 +0000 UTC m=+0.152607287 container init 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 05:27:15 np0005548915 podman[297041]: 2025-12-06 10:27:15.60573158 +0000 UTC m=+0.164494861 container start 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 05:27:15 np0005548915 podman[297041]: 2025-12-06 10:27:15.609100641 +0000 UTC m=+0.167863952 container attach 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:27:15 np0005548915 unruffled_rhodes[297057]: 167 167
Dec  6 05:27:15 np0005548915 systemd[1]: libpod-8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633.scope: Deactivated successfully.
Dec  6 05:27:15 np0005548915 podman[297041]: 2025-12-06 10:27:15.616905624 +0000 UTC m=+0.175668915 container died 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:27:15 np0005548915 systemd[1]: var-lib-containers-storage-overlay-a10555b416866b8dd8b3d599abb24c5163f6d3d5949a6cd89daca29d0a3cd467-merged.mount: Deactivated successfully.
Dec  6 05:27:15 np0005548915 podman[297041]: 2025-12-06 10:27:15.663370509 +0000 UTC m=+0.222133820 container remove 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  6 05:27:15 np0005548915 systemd[1]: libpod-conmon-8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633.scope: Deactivated successfully.
Dec  6 05:27:15 np0005548915 podman[297081]: 2025-12-06 10:27:15.869972236 +0000 UTC m=+0.059089160 container create 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:27:15 np0005548915 systemd[1]: Started libpod-conmon-213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e.scope.
Dec  6 05:27:15 np0005548915 podman[297081]: 2025-12-06 10:27:15.843203058 +0000 UTC m=+0.032319972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:27:15 np0005548915 ceph-mon[74327]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  6 05:27:15 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:27:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:15 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:15 np0005548915 podman[297081]: 2025-12-06 10:27:15.999080483 +0000 UTC m=+0.188197437 container init 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  6 05:27:16 np0005548915 podman[297081]: 2025-12-06 10:27:16.0121824 +0000 UTC m=+0.201299294 container start 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 05:27:16 np0005548915 podman[297081]: 2025-12-06 10:27:16.016045305 +0000 UTC m=+0.205162239 container attach 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  6 05:27:16 np0005548915 systemd-logind[795]: New session 59 of user zuul.
Dec  6 05:27:16 np0005548915 systemd[1]: Started Session 59 of User zuul.
Dec  6 05:27:16 np0005548915 awesome_hellman[297100]: --> passed data devices: 0 physical, 1 LVM
Dec  6 05:27:16 np0005548915 awesome_hellman[297100]: --> All data devices are unavailable
Dec  6 05:27:16 np0005548915 podman[297081]: 2025-12-06 10:27:16.411731001 +0000 UTC m=+0.600847925 container died 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:27:16 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:16 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:16 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:16.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:16 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec  6 05:27:17 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:17 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:17 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:17.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:17 np0005548915 systemd[1]: libpod-213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e.scope: Deactivated successfully.
Dec  6 05:27:17 np0005548915 systemd[1]: var-lib-containers-storage-overlay-6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357-merged.mount: Deactivated successfully.
Dec  6 05:27:17 np0005548915 podman[297081]: 2025-12-06 10:27:17.237417469 +0000 UTC m=+1.426534353 container remove 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 05:27:17 np0005548915 systemd[1]: libpod-conmon-213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e.scope: Deactivated successfully.
Dec  6 05:27:17 np0005548915 nova_compute[254819]: 2025-12-06 10:27:17.688 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:17.754Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:27:17 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:17.756Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:27:17 np0005548915 podman[297318]: 2025-12-06 10:27:17.935508462 +0000 UTC m=+0.050117665 container create e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:27:17 np0005548915 systemd[1]: Started libpod-conmon-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope.
Dec  6 05:27:18 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:27:18 np0005548915 podman[297318]: 2025-12-06 10:27:17.917151123 +0000 UTC m=+0.031760356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:27:18 np0005548915 podman[297318]: 2025-12-06 10:27:18.022557743 +0000 UTC m=+0.137166936 container init e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  6 05:27:18 np0005548915 podman[297318]: 2025-12-06 10:27:18.030853499 +0000 UTC m=+0.145462692 container start e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  6 05:27:18 np0005548915 podman[297318]: 2025-12-06 10:27:18.034127219 +0000 UTC m=+0.148736452 container attach e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  6 05:27:18 np0005548915 heuristic_wu[297343]: 167 167
Dec  6 05:27:18 np0005548915 systemd[1]: libpod-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope: Deactivated successfully.
Dec  6 05:27:18 np0005548915 conmon[297343]: conmon e18a40b6e9a29ac816f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope/container/memory.events
Dec  6 05:27:18 np0005548915 podman[297318]: 2025-12-06 10:27:18.037883001 +0000 UTC m=+0.152492194 container died e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:27:18 np0005548915 systemd[1]: var-lib-containers-storage-overlay-86e6cb91fc3e87dbc66fca98a719e3adf8a2a57eceb010f84846b5609890824f-merged.mount: Deactivated successfully.
Dec  6 05:27:18 np0005548915 podman[297318]: 2025-12-06 10:27:18.074329263 +0000 UTC m=+0.188938456 container remove e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:27:18 np0005548915 systemd[1]: libpod-conmon-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope: Deactivated successfully.
Dec  6 05:27:18 np0005548915 podman[297381]: 2025-12-06 10:27:18.248346113 +0000 UTC m=+0.048559424 container create fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 05:27:18 np0005548915 systemd[1]: Started libpod-conmon-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope.
Dec  6 05:27:18 np0005548915 podman[297381]: 2025-12-06 10:27:18.226057266 +0000 UTC m=+0.026270677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:27:18 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:27:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:18 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:18 np0005548915 podman[297381]: 2025-12-06 10:27:18.345092498 +0000 UTC m=+0.145305839 container init fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 05:27:18 np0005548915 podman[297381]: 2025-12-06 10:27:18.352590132 +0000 UTC m=+0.152803443 container start fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  6 05:27:18 np0005548915 podman[297381]: 2025-12-06 10:27:18.357548097 +0000 UTC m=+0.157761408 container attach fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]: {
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:    "1": [
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:        {
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "devices": [
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "/dev/loop3"
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            ],
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "lv_name": "ceph_lv0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "lv_size": "21470642176",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "name": "ceph_lv0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "tags": {
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.cephx_lockbox_secret": "",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.cluster_name": "ceph",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.crush_device_class": "",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.encrypted": "0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.osd_id": "1",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.type": "block",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.vdo": "0",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:                "ceph.with_tpm": "0"
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            },
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "type": "block",
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:            "vg_name": "ceph_vg0"
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:        }
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]:    ]
Dec  6 05:27:18 np0005548915 agitated_shirley[297438]: }
Dec  6 05:27:18 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26756 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:18 np0005548915 systemd[1]: libpod-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope: Deactivated successfully.
Dec  6 05:27:18 np0005548915 conmon[297438]: conmon fe599612256301cc7e52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope/container/memory.events
Dec  6 05:27:18 np0005548915 podman[297381]: 2025-12-06 10:27:18.661454784 +0000 UTC m=+0.461668125 container died fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  6 05:27:18 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:18 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:18 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:18.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:18 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28039 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:18 np0005548915 systemd[1]: var-lib-containers-storage-overlay-f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888-merged.mount: Deactivated successfully.
Dec  6 05:27:18 np0005548915 podman[297381]: 2025-12-06 10:27:18.730798192 +0000 UTC m=+0.531011543 container remove fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 05:27:18 np0005548915 systemd[1]: libpod-conmon-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope: Deactivated successfully.
Dec  6 05:27:18 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18615 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:18 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec  6 05:27:18 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:18.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:27:19 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:19 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:19 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:19.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:19 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26765 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:19 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28045 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:19 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18624 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:19 np0005548915 podman[297625]: 2025-12-06 10:27:19.341631018 +0000 UTC m=+0.043611838 container create 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 05:27:19 np0005548915 systemd[1]: Started libpod-conmon-76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734.scope.
Dec  6 05:27:19 np0005548915 podman[297625]: 2025-12-06 10:27:19.323918646 +0000 UTC m=+0.025899486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:27:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:27:19 np0005548915 podman[297625]: 2025-12-06 10:27:19.441320204 +0000 UTC m=+0.143301054 container init 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  6 05:27:19 np0005548915 podman[297625]: 2025-12-06 10:27:19.44923826 +0000 UTC m=+0.151219070 container start 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  6 05:27:19 np0005548915 podman[297625]: 2025-12-06 10:27:19.452799557 +0000 UTC m=+0.154780407 container attach 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  6 05:27:19 np0005548915 epic_euler[297648]: 167 167
Dec  6 05:27:19 np0005548915 systemd[1]: libpod-76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734.scope: Deactivated successfully.
Dec  6 05:27:19 np0005548915 podman[297625]: 2025-12-06 10:27:19.454978666 +0000 UTC m=+0.156959536 container died 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec  6 05:27:19 np0005548915 systemd[1]: var-lib-containers-storage-overlay-7178f928b1232a6bbfbf05efb44b66b6e99e266f44851b7d889e100752a48bca-merged.mount: Deactivated successfully.
Dec  6 05:27:19 np0005548915 podman[297625]: 2025-12-06 10:27:19.502627444 +0000 UTC m=+0.204608284 container remove 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  6 05:27:19 np0005548915 systemd[1]: libpod-conmon-76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734.scope: Deactivated successfully.
Dec  6 05:27:19 np0005548915 podman[297688]: 2025-12-06 10:27:19.678690649 +0000 UTC m=+0.046763965 container create b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:27:19 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  6 05:27:19 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753713519' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  6 05:27:19 np0005548915 systemd[1]: Started libpod-conmon-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope.
Dec  6 05:27:19 np0005548915 systemd[1]: Started libcrun container.
Dec  6 05:27:19 np0005548915 podman[297688]: 2025-12-06 10:27:19.656886285 +0000 UTC m=+0.024959631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  6 05:27:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:19 np0005548915 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 05:27:19 np0005548915 podman[297688]: 2025-12-06 10:27:19.767781265 +0000 UTC m=+0.135854581 container init b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:27:19 np0005548915 podman[297688]: 2025-12-06 10:27:19.774065397 +0000 UTC m=+0.142138693 container start b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  6 05:27:19 np0005548915 podman[297688]: 2025-12-06 10:27:19.776856053 +0000 UTC m=+0.144929349 container attach b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 05:27:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:20 np0005548915 nova_compute[254819]: 2025-12-06 10:27:20.279 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:20 np0005548915 lvm[297834]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:27:20 np0005548915 lvm[297834]: VG ceph_vg0 finished
Dec  6 05:27:20 np0005548915 lucid_boyd[297707]: {}
Dec  6 05:27:20 np0005548915 systemd[1]: libpod-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope: Deactivated successfully.
Dec  6 05:27:20 np0005548915 podman[297688]: 2025-12-06 10:27:20.585143746 +0000 UTC m=+0.953217062 container died b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 05:27:20 np0005548915 systemd[1]: libpod-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope: Consumed 1.173s CPU time.
Dec  6 05:27:20 np0005548915 systemd[1]: var-lib-containers-storage-overlay-bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652-merged.mount: Deactivated successfully.
Dec  6 05:27:20 np0005548915 podman[297688]: 2025-12-06 10:27:20.63778714 +0000 UTC m=+1.005860456 container remove b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  6 05:27:20 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:20 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:20 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:20 np0005548915 systemd[1]: libpod-conmon-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope: Deactivated successfully.
Dec  6 05:27:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  6 05:27:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:20 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  6 05:27:20 np0005548915 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:20 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec  6 05:27:20 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:27:20 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  6 05:27:21 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:21 np0005548915 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec  6 05:27:21 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:21 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:21 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:21.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:22 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:22 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:22 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:22 np0005548915 nova_compute[254819]: 2025-12-06 10:27:22.689 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:22 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec  6 05:27:22 np0005548915 ovs-vsctl[297919]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  6 05:27:23 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:23 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:23 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:23.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:23 np0005548915 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  6 05:27:23 np0005548915 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  6 05:27:23 np0005548915 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  6 05:27:23 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:27:23 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:27:24
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', 'images', 'backups', 'default.rgw.meta', '.nfs', '.rgw.root']
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec  6 05:27:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: cache status {prefix=cache status} (starting...)
Dec  6 05:27:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: client ls {prefix=client ls} (starting...)
Dec  6 05:27:24 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:24 np0005548915 lvm[298256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 05:27:24 np0005548915 lvm[298256]: VG ceph_vg0 finished
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  6 05:27:24 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:24 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:24 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28060 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:24 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec  6 05:27:25 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:25 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:25 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:25.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26783 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28072 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: damage ls {prefix=damage ls} (starting...)
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28078 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump loads {prefix=dump loads} (starting...)
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:25 np0005548915 nova_compute[254819]: 2025-12-06 10:27:25.281 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493359472' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18669 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28105 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  6 05:27:25 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789204808' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  6 05:27:25 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26813 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18687 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:25 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28132 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  6 05:27:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec  6 05:27:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645085107' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  6 05:27:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  6 05:27:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26822 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18699 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: ops {prefix=ops} (starting...)
Dec  6 05:27:26 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  6 05:27:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011143574' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  6 05:27:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28162 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:26 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:26 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:27:26 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:26.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:27:26 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:26 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  6 05:27:26 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308889397' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  6 05:27:26 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28174 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:27 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:27 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:27 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:27.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:27 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28180 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:27 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: session ls {prefix=session ls} (starting...)
Dec  6 05:27:27 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec  6 05:27:27 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18738 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:27 np0005548915 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: status {prefix=status} (starting...)
Dec  6 05:27:27 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18756 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  6 05:27:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 05:27:27 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26870 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  6 05:27:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3505130251' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  6 05:27:27 np0005548915 nova_compute[254819]: 2025-12-06 10:27:27.721 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:27 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:27.756Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  6 05:27:27 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  6 05:27:27 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837080414' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329383087' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2397306980' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859338692' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28252 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:27:28.416+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:27:28 np0005548915 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683697799' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18834 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:27:28.623+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:27:28 np0005548915 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1751130465' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  6 05:27:28 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:28 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:28 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:28 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:27:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:28.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:27:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:28.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:27:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:27:28 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26921 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:28 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:27:28.911+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:27:28 np0005548915 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  6 05:27:28 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3275467326' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  6 05:27:29 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:29 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:29 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:29.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  6 05:27:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/980335214' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  6 05:27:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  6 05:27:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472944739' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  6 05:27:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18879 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:29 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  6 05:27:29 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837912887' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 05:27:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26963 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28321 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:29 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18894 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  6 05:27:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240077914' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  6 05:27:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26969 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:30 np0005548915 nova_compute[254819]: 2025-12-06 10:27:30.284 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28342 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18924 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  6 05:27:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713374450' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26987 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:30 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:30 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:30 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28369 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990082 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205703735s of 12.221550941s, submitted: 3
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf3a9000 session 0x55fce0e892c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce245f4a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.344425201s of 49.353523254s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce22ff860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce1ed5680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.083337784s of 14.100935936s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989623 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.310416222s of 14.321680069s, submitted: 3
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2135800 session 0x55fce1f0ab40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.857799530s of 63.862625122s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18942 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.674135208s of 16.811328888s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce1e64f00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9187 writes, 35K keys, 9187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9187 writes, 2104 syncs, 4.37 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 776 writes, 1212 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s#012Interval WAL: 776 writes, 372 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212a400 session 0x55fce2304000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2128400 session 0x55fce232eb40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 114.850975037s of 114.855049133s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991924 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.075698853s of 12.083848000s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000044s
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce23ebe00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce0f85680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.896926880s of 93.672317505s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991333 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.079085350s of 12.086176872s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 2211840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce0e87c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.565917969s of 60.611633301s, submitted: 340
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993175 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.864642143s of 14.892098427s, submitted: 3
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 93.336196899s of 93.339126587s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996809 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87080960 unmapped: 18792448 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 148 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1c2b40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 18784256 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [1])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 27164672 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 27156480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x1175832/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 150 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115338 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x117793a/0x1239000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb9e00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2116000 session 0x55fce1c6cb40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f87400 session 0x55fcdfbac1e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.056060791s of 33.534233093s, submitted: 52
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117304 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.519258499s of 12.529978752s, submitted: 3
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce212f800 session 0x55fce2305a40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2136000 session 0x55fce0f843c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8b400 session 0x55fce236e000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120409 data_alloc: 218103808 data_used: 270336
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e9ba40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96247808 unmapped: 18022400 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2069400 session 0x55fce23052c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 18038784 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23c0b40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdf19e5a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c1c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce0f87860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88400 session 0x55fcdfbb63c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23bda40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177895 data_alloc: 218103808 data_used: 7086080
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fce112d680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1fc000/0x0/0x4ffc00000, data 0x1547bbd/0x160f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c01e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.694509506s of 10.920597076s, submitted: 65
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce1e9a960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1e0b40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce22ff4a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.842867851s of 12.866385460s, submitted: 19
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101531648 unmapped: 12738560 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239575 data_alloc: 234881024 data_used: 11247616
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 12148736 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243807 data_alloc: 234881024 data_used: 11247616
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.411628723s of 10.562047958s, submitted: 38
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247415 data_alloc: 234881024 data_used: 11251712
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246824 data_alloc: 234881024 data_used: 11251712
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246101 data_alloc: 234881024 data_used: 11251712
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212f800 session 0x55fce112c3c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 11771904 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.780336380s of 11.795572281s, submitted: 4
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2131800 session 0x55fce0f852c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136c00 session 0x55fce1e9be00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86400 session 0x55fce0f803c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1a01a40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d800 session 0x55fcdeddcf00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312002 data_alloc: 234881024 data_used: 11780096
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce1f0b2c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce245d4a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efd800 session 0x55fce0f841e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1e0f00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314613 data_alloc: 234881024 data_used: 11780096
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2572288 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce1c6de00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e64000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.690454483s of 20.819118500s, submitted: 30
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 958464 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114335744 unmapped: 3080192 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 2711552 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418941 data_alloc: 234881024 data_used: 19755008
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417293 data_alloc: 234881024 data_used: 19755008
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.662245750s of 10.842704773s, submitted: 64
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce0f80f00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce19c1a40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce23bc3c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258778 data_alloc: 234881024 data_used: 11780096
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce1e65680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf19fe00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86c00 session 0x55fce1c6d2c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 8028160 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,1])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2305860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.095330238s of 13.319671631s, submitted: 68
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172284 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.733273506s of 15.748806000s, submitted: 4
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171561 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1f0a1e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1f0a5a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1f0b0e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdeddd860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce19ff680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfbb7680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260916 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fcdf1c2f00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb6d20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1dad20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.256204605s of 10.393723488s, submitted: 26
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.100857735s of 12.104346275s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407304 data_alloc: 234881024 data_used: 19488768
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 20488192 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9033000/0x0/0x4ffc00000, data 0x2572b1d/0x2639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415702 data_alloc: 234881024 data_used: 19476480
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 18857984 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414798 data_alloc: 234881024 data_used: 19476480
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.115612030s of 12.360255241s, submitted: 80
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414878 data_alloc: 234881024 data_used: 19476480
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414966 data_alloc: 234881024 data_used: 19476480
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 18628608 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.287870407s of 13.304501534s, submitted: 4
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415814 data_alloc: 234881024 data_used: 19484672
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1d6960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfe7ed20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 18407424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19fe000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.696674347s of 28.839307785s, submitted: 37
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce232f0e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1c2000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfe7f4a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2101c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce20f7680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191346 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce20f70e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f90000/0x0/0x4ffc00000, data 0x1205b1d/0x12cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6780
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193160 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f74a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce210ad20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce23ea960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.855402946s of 17.599184036s, submitted: 5
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 25788416 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.634870529s of 11.777306557s, submitted: 33
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1048400 session 0x55fcdfeab4a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d6000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d1860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19fc780
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdff170e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256818 data_alloc: 218103808 data_used: 8298496
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdf1e0d20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d63c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d7680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.944304466s of 14.069879532s, submitted: 42
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff16b40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.114115715s of 12.152852058s, submitted: 12
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 18767872 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387114 data_alloc: 234881024 data_used: 12939264
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 17719296 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1d05a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401348 data_alloc: 234881024 data_used: 13160448
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 17219584 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c3f000/0x0/0x4ffc00000, data 0x2552bc2/0x261d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 2905 syncs, 3.80 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1859 writes, 5432 keys, 1859 commit groups, 1.0 writes per commit group, ingest: 5.24 MB, 0.01 MB/s#012Interval WAL: 1859 writes, 801 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdff17860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdfbb7680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.462458611s of 10.115522385s, submitted: 125
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239349 data_alloc: 218103808 data_used: 8298496
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce0f87e00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238137 data_alloc: 218103808 data_used: 8298496
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6d2c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfbb9860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdfbb9e00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206500 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026257515s of 12.886064529s, submitted: 81
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207421 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c05a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce19c14a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdedddc20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce19fc3c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.002244949s of 23.143316269s, submitted: 3
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215687 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fdc20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1dab40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c03c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1e0960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff5c1e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1a005a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218937 data_alloc: 218103808 data_used: 7618560
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 22347776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.405853271s of 17.451101303s, submitted: 13
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 18194432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 19800064 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308431 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.168425560s of 13.361434937s, submitted: 78
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307855 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9472000/0x0/0x4ffc00000, data 0x1d21b50/0x1dea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307695 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.022357941s of 12.031913757s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307703 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce11130e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fd4a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdfea7e00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfea6960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdfeeac00 session 0x55fce19c05a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbb79/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbbb2/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372406 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19c03c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.528182983s of 17.646516800s, submitted: 39
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd3000/0x0/0x4ffc00000, data 0x24bdbb2/0x2587000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 19562496 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 17809408 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484588 data_alloc: 234881024 data_used: 15458304
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb5bb2/0x2c7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485044 data_alloc: 234881024 data_used: 15536128
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.188508034s of 10.414656639s, submitted: 108
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479996 data_alloc: 234881024 data_used: 15540224
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 17063936 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce1a012c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19c1a40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 17055744 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce210ad20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320544 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276282310s of 11.366744995s, submitted: 33
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320712 data_alloc: 218103808 data_used: 8380416
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fcdfb2d0e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce21010e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6c3c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb40/0x1247000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.033533096s of 23.158624649s, submitted: 40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 21397504 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115703808 unmapped: 21274624 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1,0,1])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1a010e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1116000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 17793024 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdf1d6b40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19fc780
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce23043c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce1e64780
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1e65a40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286739 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.803172112s of 13.002218246s, submitted: 386
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdeddda40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287696 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 21045248 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 19922944 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335992 data_alloc: 234881024 data_used: 14737408
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce19fef00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce23bc1e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce210a5a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232510 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.927818298s of 17.092643738s, submitted: 51
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce20f72c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce23050e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce20f6000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 24641536 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfb2dc20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce0f863c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 24633344 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce23c14a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325714 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.299762726s of 10.467995644s, submitted: 46
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 23732224 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce23bc000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1dab40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 21028864 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fde00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112c000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23bd0e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce112d680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce0f87680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.559396744s of 28.672395706s, submitted: 41
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c1c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fcdeddc5a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce0f86000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce1fae960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce0e89a40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291217 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b35000/0x0/0x4ffc00000, data 0x1660b1d/0x1727000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce2101c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce0f865a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23eaf00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112dc20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293031 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 25018368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce1e650e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1e652c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60800 session 0x55fcdf1c3c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: mgrc ms_handle_reset ms_handle_reset con 0x55fcdfeeb800
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3885409716
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3885409716,v1:192.168.122.100:6801/3885409716]
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: mgrc handle_mgr_configure stats_period=5
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f6f400 session 0x55fce245f680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.341133118s of 18.417297363s, submitted: 30
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 16695296 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e10e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 18685952 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423799 data_alloc: 234881024 data_used: 11702272
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420867 data_alloc: 234881024 data_used: 11702272
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.104361534s of 11.437482834s, submitted: 145
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8db8000/0x0/0x4ffc00000, data 0x23dcb2d/0x24a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423207 data_alloc: 234881024 data_used: 11714560
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb8960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1e1c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce232ed20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98f2000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256266 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257062 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.118231773s of 13.227775574s, submitted: 42
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c05a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0f863c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f872c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f87e00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.562622070s of 12.571432114s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 20856832 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f86d20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce20f72c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19fef00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce236ef00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c14a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317200 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988c000/0x0/0x4ffc00000, data 0x1907b8f/0x19d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e05a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319610 data_alloc: 218103808 data_used: 7618560
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdff17c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1112d20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce210a960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce2101860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0e86d20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.591188431s of 16.790163040s, submitted: 37
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fcdfbb9e00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce20f6f00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efcc00 session 0x55fce1e64960
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf19fe00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdf19f680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 18300928 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515640 data_alloc: 234881024 data_used: 15814656
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8312000/0x0/0x4ffc00000, data 0x2a61bc1/0x2b2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f86f00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129261568 unmapped: 11919360 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129269760 unmapped: 11911168 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 9068544 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 8937472 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543820 data_alloc: 234881024 data_used: 20639744
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543229 data_alloc: 234881024 data_used: 20639744
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.123311996s of 17.423311234s, submitted: 113
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 7577600 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 7888896 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 8241152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1606903 data_alloc: 234881024 data_used: 20910080
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133021696 unmapped: 8159232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.647055626s of 12.847999573s, submitted: 62
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdfea7e00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c56000 session 0x55fce19fc3c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 8085504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d63c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.591730118s of 10.648483276s, submitted: 22
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb7a40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce1c6d2c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdeddd860
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.378582001s of 26.526098251s, submitted: 53
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c05a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1a01c20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcde5783c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce112dc20
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce20f7680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323411 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1e652c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1e650e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce19c03c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1d6000
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326386 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125796352 unmapped: 20709376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.411964417s of 18.474147797s, submitted: 12
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91f5000/0x0/0x4ffc00000, data 0x1b8fb2d/0x1c57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404590 data_alloc: 234881024 data_used: 13590528
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdfea65a0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.284329414s of 15.530404091s, submitted: 41
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdeddcb40
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 23224320 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d1680
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2137800 session 0x55fce23bd2c0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce112cf00
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f70e0
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:27:30 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.873546600s of 35.962779999s, submitted: 29
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe780
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.201023102s of 21.206556320s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.126986504s of 10.132149696s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config show' '{prefix=config show}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.665910721s of 16.765491486s, submitted: 2
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 23117824 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 23052288 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 34078720 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'perf dump' '{prefix=perf dump}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'perf schema' '{prefix=perf schema}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 13K writes, 4027 syncs, 3.40 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2640 writes, 8560 keys, 2640 commit groups, 1.0 writes per commit group, ingest: 7.62 MB, 0.01 MB/s#012Interval WAL: 2640 writes, 1122 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 272.928344727s of 272.932678223s, submitted: 1
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 33857536 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123748352 unmapped: 33800192 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [0,0,0,0,2])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 33767424 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 33447936 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config show' '{prefix=config show}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}'
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 33431552 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec  6 05:27:30 np0005548915 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}'
Dec  6 05:27:30 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  6 05:27:30 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953526502' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:31 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:31 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:31 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:31.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:31 np0005548915 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 05:27:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18963 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28390 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  6 05:27:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435042096' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27017 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18978 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28402 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  6 05:27:31 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054406261' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 05:27:31 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27038 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18999 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28411 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27056 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  6 05:27:32 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/390884467' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19020 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19026 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27077 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:32 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:32 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:32 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:32 np0005548915 nova_compute[254819]: 2025-12-06 10:27:32.764 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19047 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:32 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28444 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:33 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:33 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:33.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27083 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  6 05:27:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154383948' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19068 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 podman[299738]: 2025-12-06 10:27:33.43426853 +0000 UTC m=+0.061100586 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27101 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19080 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28492 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19095 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27119 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:33 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec  6 05:27:33 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986543317' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294404057' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984842228' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  6 05:27:34 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:34 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:34 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266947748' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  6 05:27:34 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec  6 05:27:34 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863449111' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  6 05:27:35 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:35 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:35 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:35.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031727652' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  6 05:27:35 np0005548915 nova_compute[254819]: 2025-12-06 10:27:35.286 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465320604' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832528068' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213913475' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  6 05:27:35 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487044674' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 05:27:36 np0005548915 systemd[1]: Starting Hostname Service...
Dec  6 05:27:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec  6 05:27:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329454876' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  6 05:27:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec  6 05:27:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938496044' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  6 05:27:36 np0005548915 systemd[1]: Started Hostname Service.
Dec  6 05:27:36 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:36 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 05:27:36 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:36.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 05:27:36 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec  6 05:27:36 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352424821' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  6 05:27:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28621 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:36 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19245 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:36 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:37 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:37 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:37 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:37 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec  6 05:27:37 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637026852' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28639 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19260 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28648 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27251 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19266 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27257 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28666 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:37.759Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:27:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:37.760Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:27:37 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:37.760Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:27:37 np0005548915 nova_compute[254819]: 2025-12-06 10:27:37.765 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27269 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:37 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19287 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27281 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec  6 05:27:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722450257' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28684 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19305 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 podman[300494]: 2025-12-06 10:27:38.475004306 +0000 UTC m=+0.110354916 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27296 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28705 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec  6 05:27:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2754842451' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  6 05:27:38 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:38 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:38 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:38.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:38.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  6 05:27:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:38.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:27:38 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:38.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27311 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:38 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  6 05:27:38 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  6 05:27:39 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:39 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:39 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:39.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336786673' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28741 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979283974' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19365 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 05:27:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27341 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 05:27:39 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 05:27:40 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19407 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 05:27:40 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  6 05:27:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 05:27:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 05:27:40 np0005548915 nova_compute[254819]: 2025-12-06 10:27:40.288 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 05:27:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 05:27:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 05:27:40 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:40 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 05:27:40 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:40.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 05:27:40 np0005548915 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  6 05:27:40 np0005548915 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  6 05:27:40 np0005548915 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  6 05:27:40 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec  6 05:27:40 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258838112' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  6 05:27:41 np0005548915 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec  6 05:27:41 np0005548915 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 05:27:41 np0005548915 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:41.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 05:27:41 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28891 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:41 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19473 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:41 np0005548915 podman[300970]: 2025-12-06 10:27:41.474010786 +0000 UTC m=+0.097967089 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 05:27:41 np0005548915 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27455 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  6 05:27:41 np0005548915 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec  6 05:27:41 np0005548915 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577467961' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
